
Trustworthy AI reading list: Jan 28

Jan 29, 2024
8 min read

What are world leaders thinking, saying, and doing about Generative AI? And what, exactly, is AI capable of doing for us? The story is unfolding before our eyes.

AI at WEF: 40% of global workforce exposed to AI

Source: World Economic Forum

At the annual gathering of the world’s political and business elite, AI was a subject on the lips of many who attended. Kicking things off was data released by the International Monetary Fund (IMF) showing that 40% of the global workforce is exposed to artificial intelligence, rising to 60% in advanced economies.

Among those affected, college-educated workers and women are likely to be most directly impacted by what was dubbed the Fourth Industrial Revolution. But it wasn’t all doom and gloom.

"AI can solve really hard, aspirational problems that people maybe are not capable of solving," said Daphne Koller, Founder and CEO at Insitro Inc. the San Francisco based drug manufacturer. This sentiment was echoed by U.S. Senator Mike Rounds who said AI could “transform healthcare in the US.” 

But others said more needed to be done on the regulatory side to ensure the benefits are evenly distributed. During a panel discussion on AI regulation featuring Microsoft President Brad Smith; Arati Prabhakar, Director of the White House Office of Science and Technology Policy; Vera Jourová, Vice-President for Values and Transparency at the European Commission; and Josephine Teo, Singapore’s Minister for Communications and Information, each panelist called for convergence in how regulators approach the industry.

With regulation set to materialise this year, understanding how AI will be controlled, and by what rules, will continue to be an important issue.

Google’s new AI system can solve complex geometry problems

Source: MIT Technology Review

While text-based AI models have been flourishing, those capable of handling the complexities of mathematics and geometry have lagged behind, owing to a lack of training data. But Google’s DeepMind division believes it has found a way around the problem.

The company’s newest program, AlphaGeometry, combines a language model with a type of AI called a symbolic engine, which uses symbols and logical rules to make deductions. This new model, which brings stricter logic to how the AI makes deductions, has been put through its paces, with surprising results.
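For a sense of how such a hybrid might work, here is a toy Python sketch of a neuro-symbolic loop: a symbolic engine forward-chains over logical rules until it stalls, and a stand-in for the language model then proposes an auxiliary construction that unlocks further deduction. Every rule and function name below is hypothetical, illustrating the general technique rather than DeepMind’s actual code.

```python
# Toy neuro-symbolic loop: deduce symbolically until stuck, then let a
# (stubbed) language model propose an auxiliary construction.
# All rules and names are hypothetical, not DeepMind's implementation.

RULES = {
    ("M is the midpoint of AB",): "AM = MB",
    ("AM = MB", "segment CM is drawn"): "triangles ACM and BCM share side CM",
}

def symbolic_deduce(facts: set[str]) -> set[str]:
    """Apply every rule whose premises are all known; return new facts."""
    return {
        conclusion
        for premises, conclusion in RULES.items()
        if all(p in facts for p in premises) and conclusion not in facts
    }

def propose_construction(facts: set[str]) -> str:
    """Stand-in for the language model: suggest an auxiliary element."""
    return "segment CM is drawn"

def solve(premises: set[str], goal: str, max_rounds: int = 5) -> bool:
    facts = set(premises)
    for _ in range(max_rounds):
        while (new := symbolic_deduce(facts)):   # deduce until stalled
            facts |= new
        if goal in facts:
            return True                          # proof found
        facts.add(propose_construction(facts))   # ask the "LM" for help
    return goal in facts

print(solve({"M is the midpoint of AB"},
            "triangles ACM and BCM share side CM"))  # -> True
```

The division of labour mirrors the article’s description: the symbolic engine supplies the strict, verifiable logic, while the language model contributes the creative leaps (which element to construct next) that pure deduction cannot make.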

AlphaGeometry has been tested on dozens of geometry problems at the same level of difficulty found at the International Mathematical Olympiad, a competition for top high school mathematics students. It completed 25 within the time limit, beating the previous state-of-the-art system, which managed only 10.

The implication is that an AI capable of doing maths could be applied to problems requiring much stricter adherence to logic, including computer vision, architecture, and even theoretical physics.

Singapore releases new GenAI governance framework to help steer global regulation in the right direction

Source: ZDNET

Regulators in Singapore have introduced a new draft governance framework specifically addressing GenAI. The draft document identifies six key risks with GenAI, including hallucinations, copyright challenges, and embedded biases.

The thrust of the framework is that AI-powered decisions should be explainable, transparent, and fair. It also suggests beefing up AI model security to prevent hackers from pulling sensitive data out of the models without permission.

As discussions in Davos revealed, global regulation around GenAI is fragmented, but Singapore is hoping to take the lead: “This proposed framework aims to facilitate international conversations among policymakers, industry, and the research community, to enable trusted development globally.”

LexisNexis deploys combined GenAI to help give it the edge

Source: CIO.com

LexisNexis, the data analytics company, has developed its own hybrid GenAI suite that helps solve one of the core problems with current LLMs: hallucinations.

Lexis+ AI stitches together ChatGPT-4 on Azure and Anthropic’s model on AWS into a single transaction: when customers use the AI platform, answers are sourced from both, with the optimal one supplied to the customer.

On top of that, citations and legal references are included in the answer, helping customers access the most up-to-date legal precedents and guard against hallucinations or biases inherited from the data set each model was trained on.
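As a rough illustration of that “ask both, keep the best” pattern, here is a minimal Python sketch. The endpoint stubs and the scoring heuristic are invented for illustration and are not LexisNexis’s actual implementation.

```python
# Hypothetical dual-model pattern: fan one query out to two providers,
# score each answer, and return the better one. Both client functions
# and the scoring rule are illustrative stubs.
from concurrent.futures import ThreadPoolExecutor

def ask_gpt4_on_azure(query: str) -> str:
    # Stub standing in for a call to an Azure-hosted GPT-4 endpoint.
    return f"Answer A to '{query}' [citation: Smith v. Jones (2019)]"

def ask_anthropic_on_aws(query: str) -> str:
    # Stub standing in for a call to an AWS-hosted Anthropic endpoint.
    return f"Answer B to '{query}'"

def score(answer: str) -> float:
    # Toy heuristic: strongly prefer answers carrying citations,
    # then break ties on length.
    return (1000 if "[citation" in answer else 0) + len(answer)

def best_answer(query: str) -> str:
    # Query both models concurrently as part of a single transaction.
    with ThreadPoolExecutor() as pool:
        answers = list(pool.map(lambda ask: ask(query),
                                [ask_gpt4_on_azure, ask_anthropic_on_aws]))
    # Supply the customer with whichever answer scores higher.
    return max(answers, key=score)

print(best_answer("What is the limitation period for breach of contract?"))
```

In a production system the score function would presumably be something far richer (grounding checks against a citation database, for instance), but the fan-out-and-select shape is the part the article describes.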

An FDA for AI?

Source: Tech Policy Press

A fascinating discussion between Merlin Stein and Connor Dunlop, the authors of a new report published by the Ada Lovelace Institute titled Safe before sale: Learnings from the FDA’s model of life sciences oversight for foundation models, can be found over at Tech Policy Press. 

The nub of the talk is how and why the Food and Drug Administration’s framework could offer a useful foundation for an artificial intelligence regulator. The reason? The FDA has a long history of overseeing novel, high-risk technologies through a framework that runs from discovery and development, through research, all the way to post-market monitoring.

That model, say Stein and Dunlop, could be useful for assessing where and how risk might appear during AI’s development, deployment, and usage, and for ensuring regulation can hold projects to account.

Rule makers struggle to keep up with GenAI advancements

Source: McKinsey

McKinsey’s Risk and Resilience team has taken a thoughtful look at how regulators have been slow to keep up with the pace of change in GenAI, while providing a useful overview of current regulations in different parts of the world.

While no country has passed comprehensive AI or GenAI regulation to date, common themes are emerging across the countries currently working towards regulating the industry. These are:

  • Transparency - Understanding what goes into training GenAI models.
  • Human agency and oversight - How to keep humans in the loop.
  • Accountability - Regulators want to see companies do more to demonstrate they are aware of the responsibility that comes with building such powerful tools.
  • Technical robustness and safety - Rule makers want to ensure AI systems operate as expected, remain stable, and can rectify user errors.
  • Diversity, nondiscrimination, and fairness - Ensure that AI systems are free of bias and that their output does not result in discrimination or unfair treatment of people.
  • Privacy and data governance - Companies need to do more to comply with existing privacy and data protection rules.
  • Social and environmental well-being - Regulators want to create better guardrails for AI’s impact not only on society but on the environment, too.

The consultancy concludes that companies that embrace safeguards and compliance are likely to have a competitive edge as regulation becomes de rigueur across international markets.

Business under the spotlight for managing new technology

Source: Edelman

While global leaders rubbed shoulders in Davos, the research arm of Edelman, the global communications firm, released its report on innovation, technology, and trust in society.

Among its findings was a shift in trust away from global companies, which consumers fear are pursuing profits over and above the needs and wants of society at large. Governments, too, have work to do to convince citizens that they have the right resources to manage and regulate new technologies, including AI.

But it wasn’t all doom and gloom. The survey, which polled more than 30,000 people across dozens of countries, found respondents were more likely to embrace innovation if they are confident it will lead to a better future. It also found that business still has a key role to play when it comes to innovation, but companies need to take greater responsibility for making sure innovations are safe, understood, and accessible.

Code Compliance Newsletter

Sema publishes a no-cost newsletter covering new developments in GenAI code compliance. The newsletter shares snapshots and excerpts from Sema’s code compliance database. Topics include the current state of regulation and AI, updates in standards and compliance, and more. Sign up to receive the newsletter here.

About Sema Technologies, Inc. 

Sema is the leader in comprehensive codebase scans, with over $1T of enterprise software organizations evaluated to inform our dataset. We are now accepting pre-orders for AI Code Monitor, which translates compliance standards into “traffic light warnings” for CTOs leading fast-paced and highly productive engineering teams. You can learn more about our solution by contacting us here.

Disclosure

Sema publications should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only. To request reprint permission for any of our publications, please use our “Contact Us” form. The availability of this publication is not intended to create, and receipt of it does not constitute, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.
