Despite AI Restrictions, AI Coding Tools Remain Widespread in Businesses

Despite their earlier enthusiasm for AI tools, organizations have started to express concerns about the security threats that come with using them. Many have begun imposing AI restrictions to limit the risks, yet employees continue to use these tools for work.

The most frequently cited risk is the security vulnerabilities these tools can introduce. AI assistants tend to generate less robust code, and using them can also mean giving outside parties access to proprietary work.

Reasons for AI Restrictions


A recent Checkmarx report shows rising concern about the use of AI tools across organizations and businesses. According to the report, 15% of the organizations surveyed have banned AI tools outright, particularly for coding.

However, about 99% of those companies believe AI coding tools are still being used in their systems without approval. A key reason is that only 29% of the organizations surveyed have established any form of governance around their AI restrictions, which makes the rules difficult to enforce.

These findings come from Checkmarx’s report, Seven Steps to Safely Use Generative AI in Application Security, which gathered data from over 900 CISOs and security experts globally.

CISOs Struggle to Create AI Restrictions

The report shows that 70% of security experts agree that a major issue is the lack of a centralized system or strategy for AI tools. Most companies acquire AI tools on an ad hoc basis, and different departments often fail to coordinate, leaving the company paying for multiple overlapping systems.

They believe one thing AI restrictions should do is help the company find the right coding tools through study and experimentation. In other words, building the right governance is what determines whether AI adoption succeeds or fails.

Many organizations are open to working with AI and are even willing to let AI tools modify code without human oversight. The problem is that they do not yet trust the AI systems currently available.

Most generative AI cannot reliably follow secure coding practices or produce truly secure code. One reason is the models’ tendency to hallucinate: to produce answers that are wrong but look correct at first glance. Hallucination is already a major problem for LLMs in general, but it is even riskier in code, where the flaws may not be immediately visible.
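
To see why such flaws can slip past a quick review, consider a hypothetical example of the kind of code an assistant might generate (the function names and table schema here are invented for illustration). Both versions look similar and return the same results for ordinary input, but only the second resists SQL injection:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Looks plausible at a glance, but the query is built by string
    # interpolation, so a crafted input such as "' OR '1'='1" can
    # smuggle extra SQL into the statement (a classic injection flaw).
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the value strictly as
    # data, never as part of the SQL statement.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

A reviewer skimming a diff could easily approve the first version, which is exactly the kind of subtle issue the report’s respondents worry about.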

The issue is serious enough that, for 80% of organizations, these hallucinations and the security threat they pose are a primary concern. For now, that means a human coder has to review and check AI-generated work before it is rolled out. Others suggest training AI to act as a security tool itself and scan code for flaws.
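
Whichever checker is used, the common thread is gating generated code before it ships. As a minimal sketch of what such a gate could look like, the snippet below assumes the open-source static analyzer Bandit is installed (a conventional scanner, not the AI-based checker the report speculates about) and fails the build whenever it flags a potential issue:

```python
import subprocess
import sys

def scan_generated_code(path: str) -> int:
    # Run Bandit recursively over `path`; its exit code is non-zero
    # when potential security issues are found.
    result = subprocess.run(["bandit", "-r", path])
    return result.returncode

if __name__ == "__main__":
    # Fail the pipeline if the scan flags anything, so AI-assisted code
    # still passes an automated security check (and, ideally, a human
    # reviewer) before it is rolled out.
    target = sys.argv[1] if len(sys.argv) > 1 else "src"
    sys.exit(scan_generated_code(target))
```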

“Enterprise CISOs are grappling with the need to understand and manage new risks around generative AI without stifling innovation and becoming roadblocks within their organizations. GenAI can help time-pressured development teams scale to produce more code more quickly, but emerging problems such as AI hallucinations usher in a new era of risk that can be hard to quantify.”

- Sandeep Johri, CEO at Checkmarx

How Do AI Restrictions Affect BPO Services?

One group that should pay close attention to these findings is the BPO IT sector. Because these providers handle coding projects for clients around the world, they must understand what AI can and cannot do.

Their AI usage must be as efficient and organized as any other part of their operations, which means systematizing how AI is used day to day. Sensible AI restrictions are critical for preventing the worst problems, such as hallucinations and insecure code, which remain some of the main barriers to AI adoption.

Perhaps most importantly, BPO IT providers have to stay vigilant and monitor their operators to make sure the rules are actually followed. Regulations mean little if no one complies with them. These findings should be a wake-up call to ensure AI is used only where it should be.