Stay on top of the latest business innovations and help support quality journalism. Sign up for a subscription today. As a reminder, our annual plan works out to a monthly rate of €24.99 + VAT. It gives you access to an archive of over 1,000 independently reported stories and some 200 new ones in 2023.
Enjoy this week's issue,
The Innovator Founder and Editor-in-Chief Jennifer L. Schenker
- N E W S I N C O N T E X T -
Some 300 million full-time jobs around the world could be automated in some way by the newest wave of AI that has spawned large language models like ChatGPT, according to a report on AI's impact on employment released this week by Goldman Sachs economists. In the United States and Europe, approximately two-thirds of current jobs “are exposed to some degree of AI automation,” and up to a quarter of all work could be done by AI completely, the bank estimates. The Goldman Sachs report was released as the Organisation for Economic Co-operation and Development (OECD) was hosting a virtual conference examining AI’s impact on work, innovation, productivity and skills. During the March 27-30 conference the OECD discussed the findings of its own survey on the impact of AI on work, which covered more than 2,000 employers and 5,300 workers in seven countries across two sectors: manufacturing and financial services. Read on to get the key takeaways from the OECD conference and this week's most important technology stories impacting business.
In 2018 France’s President Emmanuel Macron launched what could be considered a kind of digital call to arms: find European models for the development of technologies like AI that reflect the Continent’s values.
That challenge is more urgent than ever at a moment when generative AI large language models (LLMs) are going mainstream while the companies behind them simultaneously downsize their responsible AI teams. Since 2017 an estimated 73% of AI foundation models have come from the U.S., where development is mainly driven by large technology companies, and 15% from China. As uptake of these models accelerates, Europe risks becoming increasingly dependent on foreign AI models.
The hope is that not just Europe’s tech industry, but the general population, might be inspired by the idea of using AI to protect data privacy and the common good. “AI might serve as the brick that creates a unified market,” says Laurent Daudet, founder and CEO of LightOn, a Paris-based startup that offers enterprise customers an alternative to U.S. LLMs such as OpenAI’s GPT-3. The opportunity is there. But industry observers say it is unclear, in this era of exponential change, whether Europe can create better LLMs with the necessary speed and scale.
- I N T E R V I E W O F T H E W E E K -
Who: Amy Webb is the founder of the Future Today Institute, a foresight and strategy firm that helps leaders and their organizations prepare for complex futures. She pioneered a data-driven, technology-led foresight methodology that is now used within hundreds of organizations. Webb is a professor of strategic foresight at the NYU Stern School of Business, where she developed and teaches the MBA course on strategic foresight. A regular speaker at SXSW, she is the author of several popular books including The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity; The Signals Are Talking: Why Today’s Fringe Is Tomorrow’s Mainstream; and The Genesis Machine, which explores the futures of synthetic biology.
Topic: What the future looks like and how corporates can do a better job of preparing for it.
Quote: “As digital and physical realities become intertwined, innovation teams in many industries will have an opportunity to pursue new moonshots.”
- S T A R T U P O F T H E W E E K -
Amsterdam-based Zeta Alpha is building a next-generation enterprise search and insights platform, a kind of co-pilot for knowledge workers powered by natural language processing and a proprietary made-in-Europe large language model (LLM). It is working with chemical companies, banks, and consultancies. “It’s a new era,” says founder and CEO Jakub Zavrel. “The whole field of natural language processing has been redefined by generalized, highly capable large language models, which give all the functionality of a knowledge graph on the fly.” Generative AI can then summarize and digest the information for users.
- N U M B E R O F T H E W E E K -
Six months: the length of the moratorium on the development of AI systems more powerful than GPT-4 proposed this week in an open letter drafted by the Future of Life Institute and signed by more than 1,800 tech leaders, including Elon Musk and Apple co-founder Steve Wozniak. The Future of Life Institute is a nonprofit organization based in Cambridge, Massachusetts, that campaigns for the responsible and ethical development of artificial intelligence. “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” says the open letter. “As stated in the widely-endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” The letter suggests the six-month pause by all AI labs should be “public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”