PUZZLE X is coming back to Barcelona Nov. 7-9, bringing together 3,000 visionaries and technologists from 80+ countries to explore new horizons of exponential technologies. Click here for your complimentary pass, courtesy of our partnership with PUZZLE X. Passes are limited, so act fast.
Jennifer L. Schenker, The Innovator's Founder and Editor-in-Chief

- N E W S I N C O N T E X T -

Governments around the world grappled this week with how best to rein in AI. As the EU AI Act nears completion, U.S. President Joe Biden issued an Executive Order on AI, the UK held a global AI Safety Summit and issued the Bletchley Declaration, and the G7 released Advanced AI principles, a Code of Conduct and a Leaders' Statement. These developments follow news last week that the United Nations is creating an AI Advisory Body, which brings together global expertise from governments, business, the tech community, civil society, and academia.

AI has captured the attention of governments because the most significant capabilities of AI models carry the potential for serious, even catastrophic, future harm, and because today's AI systems already pose risks, including their tendency to inject bias, spread misinformation, threaten copyright protections, and weaken personal privacy.

Both money and power are on the line. The largest tech companies would prefer to self-regulate and are attempting to influence how their AI models might be governed. Some observers accuse them of using proposed legislation either to lock in advantages or to slow the market down until they catch up. Governments have their own agendas. Elon Musk, who participated in the UK AI Safety Summit along with other U.S. tech leaders, summed up governments' own conflicting interests in a post published on X, the social media platform formerly known as Twitter, which he now owns. Several hours ahead of the summit, Musk posted a cartoon on X that appeared to show the UK, the U.S., Europe and China verbalizing the risks AI poses to humankind while each secretly thought about its desire to develop the technology first. Against that backdrop, critics say the government actions announced this week so far amount to little more than the equivalent of corporate social responsibility goals with no teeth.
The EU has a "moral" obligation to step up and fill the vacuum by "creating simple but concrete rules delineating clearly what levels of safety, risk mitigation, reliability and quality upstream providers of foundation models should achieve when developing their models," Nicolas Moës, an economist by training focused on the impact of General-Purpose Artificial Intelligence (GPAI) on geopolitics, the economy and industry, said in an interview with The Innovator. He is the Director for European AI Governance at global think tank The Future Society, where he studies and monitors European developments in the legislative framework surrounding AI. (See The Innovator's Interview Of The Week). Read on to learn more about this story and the week's most important technology news impacting business.

Stay on top of the latest business innovations and support quality journalism. Subscribe to get unlimited access to all of The Innovator's independently reported articles.

Aurubis, a Hamburg, Germany-based global supplier of non-ferrous metals and one of the world's largest copper recyclers, processes input materials like complex metal concentrates, scrap, organic and inorganic metal-bearing recycling materials, and industrial residues into metals. Now it intends to add a new revenue stream to the mix: recovering valuable metals in electric car batteries, such as lithium, nickel and cobalt, as well as graphite, from the black mass produced when batteries are crushed by other participants in the value chain. Black mass is a powdery residue that forms when lithium-ion batteries are dismantled and shredded. The metals recovered in the Aurubis recycling process can then be used for new batteries and other products. “Our intention is to close the loop with these strategically relevant metals,” says Ken Nagayama, Aurubis’ head of business development, battery materials. (Nagayama is pictured here inside Aurubis’ pilot plant). The company intends to build a commercial-scale recycling plant in the second half of the decade, when there are sufficient EV batteries ready for recycling.
Aurubis is one of a growing number of companies participating in CIRCULAR REPUBLIC, an initiative started in January by UnternehmerTUM in Munich, Europe’s largest center for innovation and business creation.
“If we want to transform a sector we need all hands on deck, including end-consumers, manufacturers, clients and logistics actors, and they all need to work together at the same time. That requires incentive alignment and practical implementation,” says CIRCULAR REPUBLIC Co-founder Niclas Mauss. “That’s where we come in.”

- I N T E R V I E W O F T H E W E E K -

Who: Nicolas Moës is an economist by training focused on the impact of General-Purpose Artificial Intelligence (GPAI) on geopolitics, the economy and industry. He is the Director for European AI Governance at global think tank The Future Society, where he studies and monitors European developments in the legislative framework surrounding AI. His current focus is on the drafting of the EU AI Act and its enforcement mechanisms. Moës also serves as an expert at the OECD AI Policy Observatory in the Working Groups on AI Incidents and on Risk & Accountability, and is involved in AI standardization efforts as a Belgian representative on the International Organization for Standardization's SC 42 and the CEN-CENELEC JTC 21 committees on Artificial Intelligence.
Topic: EU and global efforts to ensure AI safety.
Quote: "The models that Europe’s traditional companies will be using for a while are all foreign. The providers of these foundation models want to push all responsibility for accidents and harms caused by violations of fundamental rights to the downstream, i.e. the traditional companies. My message to EU companies is: wake up. If we do not regulate foundation models on the upstream, the liability by default will be on Europe’s established companies, which is both unfair and counterproductive, economically speaking."

- S T A R T U P O F T H E W E E K -

Vaultree, a 2023 World Economic Forum Technology Pioneer, has developed technology that it says allows companies to perform computations, searches or analytics on data without first needing to decrypt it. The Irish scale-up counts companies in the healthcare, financial services, telecom, and travel sectors among its clients. In October Vaultree presented its technology to Europol, demonstrating how its tech can process encrypted images to allow law enforcement agencies to create collaborative data sets without exposing sensitive information. This week the company presented to South Korea’s leading companies as part of an Irish government delegation. The ability to perform processing on encrypted data has the potential to increase trusted data collaboration and solve major business challenges faced by companies. “We are creating a whole new universe of opportunities,” says Ryan Lamailli, who co-founded the company in 2020 along with Maxim Dressler, Shaun Mc Brearty and Tilo Weigandt. “This applies to every vertical because everyone who generates data and works with data has sensitive data or is exposed to it.” The company, which received a grant from the European Innovation Council in 2022, has been the recipient of multiple awards this year: it was the Gold Winner at the 2023 Cybersecurity Excellence Awards, Editor’s Choice at the Global Infosec Awards, named Most Innovative in Financial Services by Cybertech 100, and was the winner of the City of London’s Cyber Innovation Challenge in August.
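Vaultree does not disclose the details of its scheme, but the general idea of computing on data while it stays encrypted can be illustrated with a classic additively homomorphic cryptosystem, Paillier. The sketch below is a toy written for illustration, with deliberately tiny, insecure primes; it is not Vaultree's technology:

```python
import math
import random

# Toy Paillier cryptosystem: multiplying two ciphertexts yields a
# ciphertext of the SUM of the plaintexts, so a third party can "add"
# values it cannot read. Tiny primes for illustration only -- NOT secure.
p, q = 499, 547
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # Carmichael function lambda(n)
mu = pow(lam, -1, n)           # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    """Encrypt integer m (0 <= m < n) with fresh randomness r."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    # With generator g = n + 1, g^m mod n^2 simplifies to 1 + m*n.
    return (1 + m * n) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    """Recover m via L(c^lambda mod n^2) * mu mod n, L(x) = (x-1)//n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

# Homomorphic addition: combine ciphertexts without ever decrypting.
a, b = encrypt(20), encrypt(22)
total = a * b % n2
assert decrypt(total) == 42
```

Schemes of this family support only limited operations (here, addition); fully homomorphic encryption generalizes the idea to arbitrary computation, at a significant performance cost that vendors in this space are working to reduce.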

- N U M B E R O F T H E W E E K -

$2 billion: the amount Google agreed to invest in Anthropic, building on its earlier investment in the artificial-intelligence company and adding fuel to the race among startups trying to achieve the next big breakthrough in the emerging technology. Google invested $500 million upfront in the OpenAI rival and agreed to add $1.5 billion more over time, people familiar with the matter told The Wall Street Journal. Google’s new deal is the latest that technology giants have struck with artificial intelligence businesses, which need billions of dollars to train more advanced versions of their AI systems. It reflects tech giants' eagerness to align themselves with promising startups trying to seize upon the success of ChatGPT and develop their own AI-powered audio, text and image technology.