
Microsoft’s future forecast: cloud and AI

At Microsoft’s Ignite conference last month, the tech behemoth announced a number of continuing AI and cloud investments.

Microsoft’s cloud and AI businesses continue to grow, and with that growth comes a need for more security.

“Generative AI is a complete game-changer for applications,” John Montgomery, Microsoft’s corporate VP of program management for its AI platform, told IT Brew in November. “I’ve been at Microsoft 25 years; I’ve never seen customers adopt a technology this quickly.”

At Microsoft’s Ignite conference last month, the company announced a number of continuing investments in AI and the cloud, including expansions to its Copilot AI productivity booster and its Azure products. That followed comments from CEO Satya Nadella weeks earlier, during an earnings call, highlighting the company’s growth in AI and cloud computing, particularly the use of Copilot across Microsoft proper and subsidiary GitHub.

“We are off to a strong start to the fiscal year driven by the continued strength of Microsoft Cloud, which surpassed $31.8 billion in quarterly revenue, up 24%,” Nadella told investors on the call. “With Copilot, we are making the age of AI real for people and businesses everywhere.”

On track. Recent advances in prompt engineering at GitHub exemplify the potential of generative AI for programming and productivity, a push that is accelerating across Microsoft’s platforms and subsidiaries.

With all that expansion, safety is essential. During a briefing at GitHub Universe in early November, GitHub CEO Thomas Dohmke told reporters that, as a Microsoft subsidiary, his firm works with its parent company to make sure it meets security and compliance obligations.

“There’s a lot of security compliance requirements that come from both our large enterprise customers and from Microsoft large enterprise customers,” Dohmke said.

Ignition, engage. AI developments led Microsoft’s slate of Ignite announcements, and defensive deployment and safety are paramount in a threat landscape growing more dangerous by the day. Microsoft’s Montgomery told us that the company is deploying “defense layers” to handle potential security incidents and moderation needs.

Azure AI Content Safety is one such layer, Montgomery said. The online moderation tool runs off internal Microsoft Copilots to detect threats, using models that can be updated quickly and tuned and customized as needed. It’s part of an AI-informed approach to security and moderation.

“Whether it’s in Office, or Bing, or GitHub, they all have safety systems that surround them; those safety systems look at the incoming prompts that are going into the model, and the AI systems filter that information,” Montgomery said. “They look at the generated output, and they filter that information as well—that’s how we help get a defense in depth on top of the models.”
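For readers who want to picture the pattern Montgomery describes, here is a minimal, hypothetical Python sketch of that kind of defense-in-depth wrapper: the incoming prompt is screened before it reaches the model, and the generated output is screened again before it reaches the user. The moderate_text and generate functions are illustrative stand-ins, not Microsoft’s actual safety systems or APIs.

# Illustrative only: a defense-in-depth wrapper around a generative model call.
# moderate_text() and generate() are hypothetical stand-ins, not Microsoft APIs.

BLOCKED_MESSAGE = "Request blocked by content safety policy."

def moderate_text(text: str) -> bool:
    """Toy moderation check; a real system would call a content-safety
    service and score categories such as hate, violence, or self-harm."""
    banned_terms = ["example-banned-term"]
    return not any(term in text.lower() for term in banned_terms)

def generate(prompt: str) -> str:
    """Stand-in for the underlying model call."""
    return f"Model response to: {prompt}"

def safe_generate(prompt: str) -> str:
    # Layer 1: filter the incoming prompt before it reaches the model.
    if not moderate_text(prompt):
        return BLOCKED_MESSAGE
    # The model call sits between the two safety layers.
    output = generate(prompt)
    # Layer 2: filter the generated output before returning it to the user.
    if not moderate_text(output):
        return BLOCKED_MESSAGE
    return output

if __name__ == "__main__":
    print(safe_generate("Summarize this quarter's cloud revenue."))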
