News, analysis, and insights for IT leaders navigating the risks and rewards of AI.
January 30, 2024
Prompt injection, prompt extraction, new phishing schemes, and poisoned models are the most likely risks organizations face when using large language models.
Microsoft's LLM orchestration tools might finally deliver on the three-decade-old promise of autonomous software agents.
The free version of Microsoft's generative AI chatbot is available in Windows and on the web. Here's how to make the most of it.
AI Assist lets IT teams use natural language prompts in Forward Networks' flagship digital-twin platform to more quickly identify configuration issues and security vulnerabilities across network devices.
Thinking about getting your graduate degree in artificial intelligence? Here are 10 of the top schools with AI degrees worth pursuing.
Great documentation is important for humans, but even more so for machines. The concept of "tiered documentation" means that both developers and LLMs get what they need.
Wine Enthusiast, an online retailer of all things wine, used genAI to monitor tens of thousands of customer calls, gaining insight into why consumers were calling and then using that data to quickly catch and fix product defects.
Three powerful new approaches have emerged to improve the reliability of large language models by developing a fact-checking layer to support them.
Azure AI Studio, while still in preview, checks most of the boxes for a generative AI application builder, with support for prompt engineering, RAG, agent building, and low-code or no-code development.