Team Wabbi
March 22, 2024
This article originally appeared on Forbes on March 22, 2024
Expert Panel® Forbes Councils Member
Among the many professionals experimenting with the possibilities of generative artificial intelligence are developers. GenAI can speed up the code creation process and help devs tap into unique and innovative solutions. However, if overused or misused, GenAI can also lead to issues ranging from inadequate security to bias.
It’s essential for developers to remember to treat generative AI as a tool, not a full-fledged teammate. Below, 20 members of Forbes Technology Council share tips to help dev teams ensure they’re leveraging GenAI in safe, effective ways.
1. Start With Ensuring The Tool Is Updated And Secure
There are several important things developers should keep in mind. First, make sure the AI tool being used is secure, has the latest vulnerability patches and meets data protection standards. Provide clear, concise and specific commands to the tool detailing how you intend to use the code being generated, along with the security requirements. Spend time on code review to ensure important standards are met in terms of bias elimination, explainability and so on. – Naresh Mehta, Tata Consultancy Services Ltd.
2. Be Careful About IP And Copyright Infringement
Intellectual property and/or copyright infringement is the biggest concern when using AI-generated code. Most large AI vendors now indemnify users for generative AI usage; if a user is challenged on copyright grounds, the AI vendor will assume responsibility for the potential legal consequences. However, being indemnified requires traceability—the user must be able to prove a snippet of code in their system is indeed from the GenAI tool. – Vishwas Manral, Precize Inc.
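In practice, the traceability described above can be as lightweight as recording a hash of each AI-generated snippet at the moment it is accepted into the codebase. A minimal sketch of that idea (the helper name, log location and JSON-lines format here are illustrative choices, not any vendor's standard):

```python
import hashlib
import json
import time

def record_provenance(snippet: str, tool: str,
                      log_path: str = "genai_provenance.jsonl") -> str:
    """Append a provenance record for an AI-generated snippet.

    Stores a SHA-256 digest of the snippet, the tool that produced it
    and a timestamp, so the team can later prove a given piece of code
    came from the GenAI tool.
    """
    digest = hashlib.sha256(snippet.encode("utf-8")).hexdigest()
    entry = {"sha256": digest, "tool": tool, "recorded_at": time.time()}
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return digest
```

A team might call `record_provenance` from a commit hook or code-review bot, so the log accumulates automatically rather than relying on developers to remember.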
3. Leverage AppSec Programs
Not all software is created equal; therefore, not all uses of AI-generated code are equal. To capture the maximum value of AI-generated code while protecting against security threats, organizations must have well-integrated application security programs (not just tools) as part of the software development life cycle so that they can manage risk tolerance and security protocols for each application as the code is being implemented. – Brittany Greenfield, Wabbi
4. Treat GenAI As An Intern
AI is currently a great assistant but a terrible teacher. Generative AI can be very useful for speeding up routine tasks, helping with the identification of bugs and quickly executing repetitive, well-defined standard procedures. The code must make sense throughout the entire project, not just one file; AI can easily miss that. Treat generative AI as an intern and engage with it accordingly. – David William Silva, Algemetric
5. Make Sure Your Prompts Are Clear
Ensure clarity in your objectives and precision in the questions you ask the AI. Vague queries lead to inconsistent standards. Providing specific examples or context helps refine AI outputs for safer, more effective code integration. – John Kuhn, Integral
6. Minimize Risk By Using GenAI As A ‘Co-Pilot’
GenAI is an accelerator for application security testing. It has the capability, for example, to scan code bases, suggest remediations, shorten the window during which a vulnerability exists and recommend best practices. It is low-risk if utilized as a “co-pilot”—that is, as an assistant to a human security tester or developer who is reviewing large amounts of data (such as code). – Tony Velleca, CyberProof
7. Engage In Test-Driven Development
To mitigate risk, dev teams using AI-generated code must leverage test-driven development. Having the GenAI tool build highly specific test cases prior to tasking it with code generation provides a forcing function. Architects, engineers and developers must consider, with specificity, the expected outputs and define those requirements in GenAI-created test cases. – John Cho, Tria Federal
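The test-first workflow above can be sketched in a few lines: the team (or the GenAI tool, under review) writes specific test cases first, and only then is the model tasked with producing an implementation that passes them. The `slugify` function and its tests below are purely illustrative stand-ins:

```python
import re
import unittest

# Step 1: expected behavior is pinned down as tests *before* any
# implementation code is generated.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_punctuation(self):
        self.assertEqual(slugify("AI, Safely!"), "ai-safely")

# Step 2: the GenAI tool is then asked to produce an implementation
# satisfying the suite; this version stands in for that output and is
# accepted only once the tests pass under human review.
def slugify(text: str) -> str:
    text = re.sub(r"[^a-zA-Z0-9\s]", "", text).lower()
    return re.sub(r"\s+", "-", text.strip())
```

The tests act as the "forcing function" the tip describes: they make the humans state expected outputs with specificity before any generated code is trusted.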
8. Use AI-Generated Code Solely As A Starting Point
One tip I share with my dev team is to use AI-generated code as a starting point or for ideation, not as the final product. This encourages developers to critically evaluate and refine the code further. It also ensures that the introduction of AI tools complements existing workflows rather than disrupting them, enhancing developer productivity and code quality. – Jay Bhatty, NatGasHub.com
9. Give AI-Generated Code The Same Scrutiny As Human-Written Code
AI code-generation tools stand to significantly increase the volume of code developers are able to write and produce. However, while these tools can provide relief from growing software demands, they require at least the same level of scrutiny that would be given to code written by a human. Failing to review AI-generated code properly will lead to the introduction of technical debt and, eventually, rework. – Olivier Gaudin, SonarSource
10. Implement Strict Controls On Anything Built With GenAI
Development teams must remain vigilant and implement strict controls on anything built with GenAI so they can feel confident in exploring new ways that business users can leverage it. This is essential to prevent users from inadvertently sharing sensitive information as they interact with generative AI, which could lead to data leaks, security breaches and compliance failures. – Michael Bargury, Zenity
11. Consider Both The Risks And The User Experience
The most strategic teams building with generative AI are evaluating how to improve productivity with a security-first mindset. Platform-agnostic companies serving as the connective tissue between platforms hold an advantage if they consider both the risks and the user experience. You’re bringing data together across multiple platforms, then aggregating it to produce valuable insights. – Stephen Hsu, Calendly
12. Frame AI As An Accelerator, Not A Replacement
Empower developers to validate generative logic before a launch. Frame AI as an accelerator, not a replacement; strong human guardrails equal a controlled boost. Machines are here to augment human work, not replace it! – Ankit Virmani, Google Inc.
13. Always Include A Human Security Expert In The Loop
AI’s propensity for “hallucinations” and unseen vulnerabilities necessitates oversight by someone who’s well-versed in security. Assume that AI-generated code carries risks. You can utilize AI to review its own output for security flaws, but prioritize final validation by a security professional to ensure safety and effectiveness. – Ian Swanson, Protect AI
14. Know That Gaps In Knowledge Or Contextual Understanding May Need To Be Addressed
Developers tapping into GenAI’s potential for code creation must adhere to modern software craftsmanship principles. They should run code analysis tools on it to avoid security issues creeping in. It should not be blindly inserted without proper human review. Carefully go through the code with the mindset that gaps in knowledge or contextual understanding may need to be addressed before use. – Joseph Ours, Centric Consulting
15. Focus On Alignment With Your Intent Rather Than Stylistic Elements
Leveraging GenAI can yield real time savings, especially if you select a tool that will learn from your edits. I would always argue that it’s better to focus on evaluating and approving generated code based on its alignment with your original intent rather than being too focused on the stylistic elements, even if there are specifics that need reworking. – Al Kingsley, NetSupport
16. Perform Regular Security Audits
As with all human-written code, it is paramount to perform regular security audits of AI-generated code and to understand what was generated. It is also critical to ensure the code meets relevant compliance standards to mitigate risks across production environments. – Antti Nivala, M-Files
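A full audit program involves dedicated scanners and compliance tooling, but even a small static check can catch obvious issues in generated code before review. As a toy illustration (the set of "risky" calls and the function name are assumptions for the sketch, not an audit standard), Python's built-in `ast` module can flag dangerous constructs:

```python
import ast

# Illustrative deny-list; a real audit would use a proper scanner.
RISKY_CALLS = {"eval", "exec", "compile", "__import__"}

def audit_source(source: str) -> list[str]:
    """Return findings for calls to risky builtins in generated code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append(f"line {node.lineno}: call to {node.func.id}")
    return findings
```

Running such a check automatically on every AI-generated snippet gives auditors a record of what was generated and what was flagged, complementing rather than replacing periodic manual audits.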
17. Establish An ‘AI Operations Core’
Developers should establish an “AI operations core” for code generation, ensuring inputs and outputs are sanitized and secure. Use only reputable AI models and tools with strong industry acceptance to mitigate risks, such as model tampering or poisoning. This security-first stack fosters safe, effective AI integration, protecting against emerging threats while leveraging AI’s full potential. – Christopher Daden, Criteria Corp.
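Sanitizing inputs, as the "AI operations core" idea suggests, can start with redacting credential-like strings from prompts before they ever reach a model. A minimal sketch, assuming simple regex patterns (the patterns and function name are illustrative, and a production system would need far broader coverage):

```python
import re

# Illustrative patterns for credential-like strings in prompts.
SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"(?i)password\s*[:=]\s*\S+"),
    re.compile(r"(?i)secret\s*[:=]\s*\S+"),
]

def sanitize_prompt(prompt: str) -> str:
    """Redact credential-like strings before the prompt reaches the model."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt
```

Wrapping every model call in a sanitizer like this, with a matching check on outputs, is one concrete way to make the security-first stack the tip describes enforceable rather than aspirational.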
18. Consider All Forms Of Bias
Considering all forms of bias is imperative for maximizing AI’s benefits while minimizing its shortcomings. Make sure you construct development teams with diverse voices and skill sets and a curiosity for experimentation. These teams will be responsible for training the models with unbiased, clean data and stress-testing them to ensure the outcomes generated are both accurate and ethical. – Jeff Wong, EY
19. Implement Explainable AI Frameworks
Implement explainable AI frameworks to enhance transparency around how generative AI models produce code. This approach helps teams understand decision-making processes, ensuring the AI-generated code aligns with safety and ethical standards, and facilitates troubleshooting and refinement for effective deployment. – Przemek Szleter, DAC.digital
20. Abstract Project Requirements To Protect Sensitive Information
When using generative AI for code generation, it’s essential to abstract your project requirements to protect sensitive information. Always ensure the AI-generated code aligns with similar, not exact, requirements to prevent data exposure. Thoroughly review and test the AI-generated code for security and integration. This approach safeguards your project while leveraging AI’s efficiency. – Amitkumar Shrivastava, Fujitsu
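The abstraction step described above can be done mechanically: replace sensitive identifiers with neutral placeholders before prompting, then map the placeholders back in the generated code. A minimal sketch (the helper names and the idea of a simple substitution map are assumptions for illustration; real names may need tokenization-aware handling):

```python
def abstract_prompt(prompt: str, sensitive_terms: dict[str, str]) -> str:
    """Replace sensitive names with neutral placeholders before prompting."""
    for real, placeholder in sensitive_terms.items():
        prompt = prompt.replace(real, placeholder)
    return prompt

def restore_names(code: str, sensitive_terms: dict[str, str]) -> str:
    """Map placeholders in generated code back to the real identifiers."""
    for real, placeholder in sensitive_terms.items():
        code = code.replace(placeholder, real)
    return code
```

With this pattern, the model only ever sees `ServiceA` instead of an internal product name, while the developer still receives code that compiles against the real system.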