What to do about AI? It’s a question on the minds of governments, private sector stakeholders, and consumers as machine intelligence continues to play an outsize role in the tech sector.
For tech consultant Kate O’Neill, author of What Matters Next: A Leader’s Guide to Making Human-Friendly Tech Decisions in a World That’s Moving Too Fast, the answer is found in regulation. It’s not a popular perspective in the tech business, but as she told IT Brew, that doesn’t mean there’s no place for it. Just because guardrails can be onerous doesn’t mean they’re going to stifle innovation.
“There’s a difference between acceleration as we experience it, driven by Silicon Valley and frontier models within AI…versus what I call ethical acceleration, which is going as fast as you can within the bounds of what is known to be safe and what you know is not going to exceed your understanding of consequences,” O’Neill said. “And that [is what] I don’t think we’re doing very well at present.”
Timing is everything. Indeed, regulatory decisions on the part of the EU have drawn harsh criticism from US tech firms like Meta and Google. The AI Act, first proposed in 2021 by the European Commission, the EU’s executive arm, predates ChatGPT’s debut by more than a year. Dorothy Chou, Google DeepMind’s head of public policy, told CNBC that governments are “regulating on a time scale that doesn’t match the technology.”
It’s a message the current White House has echoed. Vice President JD Vance, speaking at an AI summit in Paris in February, told European leaders that the administration feels “excessive regulation of the AI sector could kill a transformative industry.”
Regulations can be a headache for both tech teams and company leadership. And since there’s no way to avoid AI, organizations need AI governance plans in place at a minimum, Gallagher Managing Director of Cyber Liability John Farley told CFO Brew in February. Those plans, he said, can manage “who can access these AI tools and for what purpose.”
Keeping control of access is part of making sure those tools are safe and secure. Chuck Herrin, field CISO at F5, has watched AI adoption sweep across the industry. Security, he said, needs to be a priority, and it’s essential not to fall behind.
“We can’t be slow here,” Herrin told IT Brew. “The adoption curve for AI is like something we’ve never seen before.”
Institutional challenges. Part of the problem, O’Neill said, is that the “move fast and break things” mentality that drives innovation and development at private tech companies is not ethical when applied to areas like healthcare or Social Security. The acceleration of AI means care has to be taken to maintain a degree of control.
“One data set, one day becomes the trained algorithm of the next day, becomes the underpinnings of a large language model the next day, and then there’s just no pulling it back,” O’Neill said.
But that raises questions for developers, IT teams, and others who have to balance the demands of their organizations with the reality of what’s expected. Clarity is important: with AI regulations varying from country to country, and internal policies from company to company, tech professionals on staff just need to know where the lines are.
“This is where companies need to develop that internal governance and create those risk-assessment frameworks and make sure they’re clear and communicated,” O’Neill said.