
Trusting your GenAI code: Practical tips for engineering leaders

Mar 13, 2024

There’s a lot of pressure to adopt GenAI code. But there’s also uncertainty about the risks involved. Ultimately, the decision about when, whether, and how to use GenAI in the software development lifecycle (SDLC) comes down to a question of trust.

As GenAI becomes mainstream and more developers use it, engineering leaders may have no choice but to address the issue head-on. By ignoring GenAI, your organization could fall behind competitors or miss out on innovation opportunities.

To stay relevant in the market, you’ll need to establish a basis of trust for the use of AI within your organization.

Trusting GenAI code: Simple suggestions

The specifics of what trust looks like differ at every organization. Defining them requires discussion with stakeholders who will likely bring a mix of perspectives: regulatory affairs experts, C-suite leadership, general counsel, and developers who embrace code as craft.

That means asking questions, encouraging dialogue, following up, and then integrating diverse perspectives into policies and guidelines.

Here are a few suggestions to navigate these conversations.

1. Know the compliance landscape

The challenge here is that the regulatory landscape is still being defined. That’s understandable, given that innovation tends to move faster than legislation. At the same time, moving too fast, without careful consideration, means that your codebase risks becoming obsolete.

Even though the picture isn’t clear, it’s critical to anticipate what’s coming. As the compliance landscape begins to take shape, your organization can gain a competitive advantage by monitoring themes and trends. There are a few key details to track.

  • Your company’s legal nexus based on the locations of your headquarters, employees, and customers.
  • Conversations taking place among regulators.
  • How regulatory decisions are being evaluated within court systems.

Knowing the compliance landscape can help your team operate from a position of wisdom. Keep in mind that many ahead-of-the-curve leaders are already defining protocols and policies based on themes alone, rather than definitive decisions.

2. Know exactly how you’re using GenAI code

Clearly defined parameters can help your engineering team (1) avoid potential problems early and (2) establish clear protocols for auditing and troubleshooting should issues come up. Instead of chasing down problems ad hoc, you can address them systematically.
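
For illustration only, one lightweight way to make such protocols auditable is to declare GenAI involvement in commit metadata and query it when questions arise. The sketch below assumes a hypothetical “GenAI-Assisted:” commit trailer; the trailer name is an invented convention, not an established standard.

    # Minimal sketch: list commits whose messages declare GenAI involvement
    # via a hypothetical "GenAI-Assisted:" trailer. The convention is
    # illustrative; adapt it to whatever your team standardizes on.
    import subprocess

    def genai_assisted_commits(repo: str, trailer: str = "genai-assisted:") -> list[str]:
        """Return short hashes of commits that carry the trailer."""
        # %x1f and %x1e emit unit/record separators, so commit bodies with
        # blank lines do not break the parse.
        log = subprocess.run(
            ["git", "-C", repo, "log", "--format=%h%x1f%B%x1e"],
            capture_output=True, text=True, check=True,
        ).stdout
        hits = []
        for record in log.split("\x1e"):
            sha, _, body = record.strip().partition("\x1f")
            if any(line.strip().lower().startswith(trailer) for line in body.splitlines()):
                hits.append(sha)
        return hits

A convention like this lets an audit start from version-control history rather than from interviews and guesswork.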

George Westerman, Senior Lecturer at the MIT Sloan School of Management and founder of the Global Opportunity Forum in MIT’s Office of Open Learning, recently led a team of researchers who published an article in Harvard Business Review on this topic.

“Start with the problem, not the technology. Wielding a (generative AI) hammer, everything starts to look like a nail,” his team writes. “But, instead of asking how to do generative AI in your company, ask what you need to accomplish. Yes, AI can help explore, predict, optimize, and recommend. But not every problem is an AI problem.”

The team further cites an observation from Tom Peck, Chief Information and Digital Officer of Sysco. 

“I don’t need a generative AI strategy. What I need is an automation strategy. A lot of things…can be solved with more basic or traditional automation capabilities.” Starting with the problem clarifies which tool you need.

3. Build measurement standards for %GenAI code tied to anticipated risk

The key here is to apply a percent threshold for the use of GenAI with respect to a particular task or workflow. In some situations, that metric may be zero. In others, it will be higher. Pragmatically speaking, especially for companies subject to United States copyright law, that number is unlikely to ever be 100%.
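
As a sketch of how such a standard might be encoded, assuming workflow names and ceilings invented purely for illustration:

    # Hypothetical per-workflow ceilings on the share of GenAI-attributed
    # code; the names and numbers are invented for illustration.
    GENAI_CEILINGS = {
        "internal-tooling": 0.50,      # lower risk, generous ceiling
        "customer-facing-api": 0.20,
        "ip-sensitive-core": 0.00,     # patent/trademark exposure: no GenAI code
    }

    def within_ceiling(workflow: str, genai_loc: int, total_loc: int) -> bool:
        """True if the measured GenAI share stays within the workflow's ceiling."""
        ceiling = GENAI_CEILINGS.get(workflow, 0.0)  # unknown workflow: strictest
        share = genai_loc / total_loc if total_loc else 0.0
        return share <= ceiling

    # 1,200 GenAI-attributed lines out of 10,000 is 12%, under the 20% ceiling.
    assert within_ceiling("customer-facing-api", 1_200, 10_000)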

The right threshold depends on (1) the intellectual property protections, such as patents and trademarks, that apply to your codebase and (2) how the courts choose to define “sufficient human authorship.”

Should the court system establish new standards, a historical system of record can help your organization expedite remediation and prevent the resulting technical debt.
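
That system of record can be as simple as an append-only log of provenance entries. Here is a minimal sketch; the field names and values are hypothetical:

    import datetime
    import json
    from dataclasses import asdict, dataclass

    @dataclass
    class ProvenanceRecord:
        """One entry in a hypothetical GenAI system of record."""
        commit: str
        file_path: str
        tool: str           # which assistant produced the code
        genai_share: float  # measured share of GenAI-attributed lines
        reviewed_by: str    # human reviewer, evidencing human involvement
        recorded_at: str

    def append_record(log_path: str, rec: ProvenanceRecord) -> None:
        # Append-only JSON Lines: cheap to write now, queryable later if
        # courts redefine what "sufficient human authorship" requires.
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(rec)) + "\n")

    append_record("genai_provenance.jsonl", ProvenanceRecord(
        commit="abc1234", file_path="src/billing/invoice.py",
        tool="example-assistant", genai_share=0.35, reviewed_by="a.reviewer",
        recorded_at=datetime.datetime.now(datetime.timezone.utc).isoformat(),
    ))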

4. Develop best practices for integrating human & GenAI coding efforts

Right now, a lot of new terminology is emerging in the GenAI coding landscape. One construct is pure vs. blended GenAI code: “pure” refers to code that is 100% GenAI-generated, while “blended” refers to code that combines GenAI output with human input.
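
In tooling, that distinction can be modeled simply. The sketch below merely encodes these colloquial labels; the classification logic is an illustration, not a standard:

    from enum import Enum

    class Origin(Enum):
        """Colloquial provenance labels; the terminology may well change."""
        PURE = "pure"        # 100% GenAI-generated, no human edits
        BLENDED = "blended"  # GenAI output combined with human input
        HUMAN = "human"      # no GenAI involvement

    def classify(genai_lines: int, human_lines: int) -> Origin:
        """Label a change by line provenance (illustrative logic only)."""
        if genai_lines and not human_lines:
            return Origin.PURE
        if genai_lines:
            return Origin.BLENDED
        return Origin.HUMAN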

These word choices will likely change, so for now it’s best to treat them colloquially. On that note, here’s how ChatGPT addresses the topic after a series of prompts:

  • Blended code, at its core, is the fusion of generative artificial intelligence (AI) with human-written code, reshaping the landscape of software development. This innovative approach harnesses the computational power of AI to complement and enhance the creativity and intuition of human developers.
  • Blended code represents a symbiotic relationship between AI and human developers. While AI automates repetitive tasks, suggests optimizations, and accelerates development cycles, human developers leverage their expertise to make high-level design decisions, solve complex problems, and ensure the integrity of the final product.
  • One of the most promising applications of blended code is in code synthesis and refactoring. AI analyzes existing codebases, identifies inefficiencies, and suggests optimizations, leading to cleaner, more maintainable code. This results in improved software quality and faster development cycles.

However, there are challenges to consider.

  • Ethical considerations. Integrating generative AI raises ethical concerns regarding privacy, fairness, and transparency, necessitating careful consideration of how AI models are trained and whether they perpetuate biases.
  • Bias in generated code. Generative AI may inadvertently replicate biases present in the training data, leading to the generation of biased code that perpetuates stereotypes or discriminates against certain groups, requiring developers to mitigate biases through careful curation of training data and implementation of fairness-aware algorithms.
  • Transparency and interpretability. Ensuring transparency and interpretability in AI-generated code is crucial for debugging and maintaining systems, necessitating techniques such as model explainability and interpretability to shed light on the inner workings of AI models.
  • Security concerns. Generative AI introduces security concerns as AI-generated code may contain vulnerabilities or loopholes that could be exploited by malicious actors, requiring rigorous testing and validation to ensure compliance with security standards.
  • Regulatory compliance. Developers must consider regulatory requirements and industry standards when using generative AI in software development to ensure compliance and avoid legal repercussions.

Final thoughts

The AI industry is in a perpetual state of change, and the effects of today’s decisions will compound over time. Adaptability will require the right balance of (1) making sound judgment calls and (2) planning for the future based on incomplete information. The right tools, conceptual and technical, can help you get there.

Keeping track of global GenAI compliance standards 

Periodically, Sema publishes a no-cost newsletter covering new developments in GenAI code compliance. The newsletter shares snapshots and excerpts from Sema’s GenAI Code Compliance Database. Topics include recent regulations, lawsuits, stakeholder requirements, mandatory standards, and optional compliance standards. The scope is global.

You can sign up to receive the newsletter here.

About Sema Technologies, Inc. 

Sema is the leader in comprehensive codebase scans, with over $1T of enterprise software organizations evaluated to inform our dataset. We are now accepting pre-orders for AI Code Monitor, which translates compliance standards into “traffic light warnings” for CTOs leading fast-paced, highly productive engineering teams. You can learn more about our solution by contacting us here.

Disclosure

Sema publications should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only. To request reprint permission for any of our publications, please use our “Contact Us” form. The availability of this publication is not intended to create, and receipt of it does not constitute, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.


Are you ready?

Sema is now accepting pre-orders for GBOMs as part of the AI Code Monitor.
