The risks of entry-level developers over-relying on AI – Source: www.csoonline.com

As AI-generated code becomes more common, some CISOs argue that overreliance could erode developers’ critical skills, creating blind spots organizations shouldn’t ignore.

Whenever tools like ChatGPT go down, it’s not unusual to see software developers step away from their desks, take an unplanned break, or lean back in their chairs in frustration. For many professionals in the tech space, AI-assisted coding tools have become a convenience. And even brief outages, like the one that happened on 24 March 2025, can bring development to a halt.

“Time to make a coffee and sit in the sun for 15 mins,” one Reddit user wrote. “Same,” another responded.

Overreliance on generative AI tools like ChatGPT is steadily growing among tech professionals, including those working in cybersecurity. These tools are changing how developers write code, solve problems, learn, and think — often boosting short-term efficiency. However, this shift comes with a trade-off: developers risk weakening their coding and critical thinking skills, which can ultimately have long-term consequences for both them and the organizations they work for.

“We’ve observed a trend where junior professionals, especially those entering cybersecurity, struggle with deep system-level understanding,” says Om Moolchandani, co-founder and CISO/CPO at Tuskira. “Many can generate functional code snippets but struggle to explain the logic behind them or secure them against real-world attack scenarios.”

A recent survey by Microsoft backs Moolchandani’s observations, highlighting that workers who rely on AI to do part of their job tend to engage less deeply in questioning, analyzing, and evaluating their work, especially if they trust that AI will deliver accurate results. “When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship,” the paper reads.

As AI code generators change how developers work, they also reshape how organizations function. The challenge for cybersecurity leaders is to leverage this technology without sacrificing critical thinking, creativity, and problem-solving, the very skills that make developers great.

Short-term wins, long-term risks

Some CISOs are concerned about the growing reliance on AI code generators — especially among junior developers — while others take a more relaxed, wait-and-see approach, saying that this might be an issue in the future rather than an immediate threat. Karl Mattson, CISO at Endor Labs, argues that the adoption of AI is still in its early stages in most large enterprises and that the benefits of experimentation still outweigh the risks. 

“I haven’t seen clear evidence that AI reliance is leading to a widespread decline in fundamental coding skills,” he says. “Right now, we’re in a zone of creative optimism, prototyping, and finding early successes with AI. A decline in core fundamentals still feels quite a way down the road.”

Others are already seeing some of the effects of overreliance on AI tools for writing code. Sean O’Brien, founder of Yale Privacy Lab and CEO and founder of PrivacySave, voices strong concerns about the growing dependence on generative AI. AI-powered tools like ChatGPT and low-code platforms “often encourage a ‘vibe coding’ mentality, where [developers] are more focused on getting something to work than actually understanding how or why it works,” O’Brien says.

Aviad Hasnis, CTO at Cynet, is also worried, particularly when it comes to junior professionals, who “rely heavily on AI-generated code without fully grasping its underlying logic.” According to him, this overreliance creates multiple challenges for both individuals and organizations. “Cybersecurity work demands critical thinking, troubleshooting skills, and the ability to assess risks beyond what an AI model suggests,” he says. 

While relying on AI code generators can provide quick solutions and short-term gains, over time this dependency can backfire. “As a result, developers may struggle to adapt when AI systems are unavailable or insufficient, potentially rendering them ineffective as innovators and technologists in the long run,” says Oj Ngo, CTO and co-founder of DH2i.

The risks of blind spots, compliance gaps, and license violations

As generative AI becomes more embedded in software development and security workflows, cybersecurity leaders are raising concerns about the blind spots it can potentially introduce.

“AI can produce secure-looking code, but it lacks contextual awareness of the organization’s threat model, compliance needs, and adversarial risk environment,” Moolchandani says.

Tuskira’s CISO lists two major issues: first, that AI-generated security code may not be hardened against evolving attack techniques; and second, that it may fail to reflect the specific security landscape and needs of the organization. Additionally, AI-generated code might give a false sense of security, as developers, particularly inexperienced ones, often assume it is secure by default.
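
The “secure-looking” problem is easiest to see in a concrete case. As a hypothetical illustration (not drawn from the article), the first function below reads like tidy, working code — exactly the kind an AI assistant readily produces — yet it is injectable; the second shows the parameterized fix a careful reviewer would insist on:

```python
# Hypothetical illustration: AI-assisted code often *looks* clean and
# secure while hiding a classic flaw. Both functions use Python's
# standard sqlite3 module.

import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Reads fine at a glance, but interpolating user input into SQL
    # allows injection, e.g. username = "x' OR '1'='1".
    query = f"SELECT id, role FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles escaping, so the input
    # can never change the shape of the statement.
    query = "SELECT id, role FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```

A developer who assumes AI output is secure by default would ship the first version without a second look; spotting the difference requires exactly the system-level understanding Moolchandani describes.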

Furthermore, there are risks associated with compliance and violations of licensing terms or regulatory standards, which can lead to legal issues down the line. “Many AI tools, especially those generating code based on open-source codebases, can inadvertently introduce unvetted, improperly licensed, or even malicious code into your system,” O’Brien says. 

Open-source licenses, for example, often have specific requirements regarding attribution, redistribution, and modifications, and relying on AI-generated code could mean accidentally violating these licenses. “This is particularly dangerous in the context of software development for cybersecurity tools, where compliance with open-source licensing is not just a legal obligation but also impacts security posture,” O’Brien adds. “The risk of inadvertently violating intellectual property laws or triggering legal liabilities is significant.”

From a technological perspective, Wing To, CTO at Digital.ai, points out that AI-generated code should not be seen as a silver bullet. “The key challenge with AI-generated code — in security and other domains — is believing that it is of any better quality than code generated by a human,” he says. “AI-generated code runs the risk of including vulnerabilities, bugs, protected IP, and other quality issues buried in the trained data.”

The rise in AI-generated code reinforces the need for organizations to adopt best practices in their software development and delivery. This includes consistently applying independent code reviews and implementing robust CI/CD processes with automated quality and security checks.
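
As one sketch of what such an automated check can look like — assuming a Python codebase under src/ and the open-source Bandit static analyzer, neither of which the article specifies — a CI job can simply fail the build whenever the scanner reports findings:

```python
# A minimal sketch of an automated security gate for CI, assuming the
# open-source Bandit scanner is installed (pip install bandit) and the
# code lives under src/. Adapt the tool and paths to your own pipeline.

import subprocess
import sys

def run_security_gate() -> int:
    # Bandit exits non-zero when it finds issues, so its return code
    # can fail the build directly.
    result = subprocess.run(["bandit", "-r", "src"])
    if result.returncode != 0:
        print("Security gate failed: review Bandit findings before merging.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_security_gate())
```

The point is not the specific tool but the principle: AI-generated code enters the pipeline on the same terms as human-written code, subject to the same independent review and automated scrutiny.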

Changing the hiring process

Since generative AI is here to stay, CISOs and the organizations they serve can no longer afford to overlook its impact. In this new normal, it becomes necessary to set up guardrails that promote critical thinking, foster a deep understanding of code, and reinforce accountability across all teams involved in any kind of code writing.

Companies should also rethink how they evaluate technical skills during the hiring process, particularly when recruiting less experienced professionals, says Moolchandani. “Code tests may no longer be sufficient — there needs to be a greater focus on security reasoning, architecture, and adversarial thinking.” 

During DH2i’s hiring process, Ngo says, they assess candidates’ dependence on AI to gauge their ability to think critically and work independently. “While we recognize the value of AI in enhancing productivity, we prefer to hire employees who possess a strong foundation in fundamental skills, allowing them to effectively use AI as a tool rather than relying on it as a crutch.”

Don Welch, global CIO at New York University, has a similar perspective, adding that the people who will thrive in this new paradigm will be the ones who stay curious, ask questions, and seek to understand the world around them as best as they can. “Hire people where growth and learning are important to them,” Welch says.

Some cybersecurity leaders fear that becoming overreliant on AI could widen the talent shortage the industry already struggles with. For small and mid-sized organizations, it can become increasingly difficult to find skilled people and then help them grow. “If the next generation of security professionals is trained primarily to use AI rather than think critically about security challenges, the industry may struggle to cultivate the experienced leaders necessary to drive innovation and resilience,” Hasnis says.

Generative AI must not replace coding knowledge

Early-career professionals who use AI tools to write code without developing a deep technical foundation are at a high risk of stagnating. They might not have a good understanding of attack vectors, system internals, or secure software design, says Moolchandani. “Mid-to-long term, this could limit their growth into senior security roles, where expertise in threat modelling, exploitability analysis, and security engineering is crucial. Companies will likely differentiate between those who augment their skills with AI and those who depend on AI to bridge fundamental gaps.”

Moolchandani and others recommend organizations increase their training efforts and adjust their methods of transferring knowledge. “On-the-job training has to be more hands-on, focusing on real-world vulnerabilities, exploitation techniques, and secure coding principles,” he says.

Mattson says that organizations should focus more on helping employees gain relevant skills for the future. Technology will evolve quickly, and training programs alone may not be enough to keep pace. “But a culture of continuous skill improvement is durable for any change that comes,” Mattson adds.

These training programs should help employees understand both the strengths and limitations of AI, learning when to rely on these tools and when human intervention is mandatory, says Hasnis. “By combining AI-driven efficiency with human oversight, companies can harness the power of AI while ensuring their security teams remain engaged, skilled, and resilient,” he says. He advises developers to always question AI outputs, especially in security-sensitive environments.  

O’Brien also believes that AI should go hand in hand with human expertise. “Companies need to create a culture where AI is seen as a tool: one that can help but not replace a deep understanding of programming and traditional software development and deployment,” he says.

“It’s essential that companies don’t fall into the trap of just using AI to patch over a lack of expertise.”

Original Post URL: https://www.csoonline.com/article/3951403/the-risks-of-entry-level-developers-over-relying-on-ai.html

Category & Tags: Development Tools, Generative AI, IT Skills – Development Tools, Generative AI, IT Skills
