Hello and welcome to Eye on AI.
With AI policy news out of the U.S. and Europe making headlines almost daily, it’s vital to keep an eye on how these issues are unfolding across the rest of the world, too. This week, we’re turning our attention to the African Union’s session on digital transformation, which convened leaders from all 55 African nations for four days of discussion on the continent’s digital future and AI strategy.
Held this past week in Ethiopia, the summit kicked off with three days of expert meetings, where the Draft Conceptual Framework of the Continental Strategy on Artificial Intelligence was the main item on the AI agenda. First drafted in August and still in development, the framework is intended to shape an ethical and economically fruitful AI strategy for the continent and address key sectors that can benefit from the technology, including education, health care, agriculture, and finance. The session provided space for continued deliberations around the framework, including establishing its defining principles and strategic objectives and addressing aspects related to security and the responsible use of AI.
Overall, the members held up AI as crucial to attaining the continent’s Sustainable Development Goals (SDGs) and Agenda 2063, a 50-year development plan described as Africa’s “blueprint” for achieving goals around economic development, political independence from foreign powers, democracy, gender equality, and the strengthening of African cultural identity.
“AI is important to Africa because of its economic, social, political and geopolitical impact. AI technologies can stimulate economic growth by creating new industries, driving innovation, and generating employment opportunities. It can also support education and the preservation of African languages,” reads a press release from the African Union about the session.
The meetings concluded with member states committing to promote digitalization efforts around climate change, infrastructure development, and energy. The draft declaration around this agreement also addressed data governance issues made increasingly urgent by the proliferation of AI, calling on the African Union to support member states in developing national data governance systems and capabilities.
After being largely left out of the internet revolution, African national and technology leaders know there’s massive opportunity in AI. But while there is a bustling scene of AI startups, groups, and conferences on the continent, the tech behemoths from elsewhere on the globe have largely set the stage and reaped the benefits thus far. Evidence has shown that white, Western concepts are overrepresented in the training data for AI models and lead to skewed outputs, undermining the models’ usefulness for other populations and opening the door to potential harms. At the same time, the U.S., many European nations, and groups like the G7 are coming together to make vital decisions around global AI policy without significant representation from the African continent.
Africa has also experienced significant brain drain in AI as deeply resourced and well-paying tech companies lure talent abroad. A 2022 survey of founders who are members of the nonprofit Black in AI, for example, revealed that while the majority were born in sub-Saharan Africa, roughly half attended graduate schools in North America and have since remained there for work, according to Wired.
At the same time, data labelers in countries such as Kenya and Uganda have been integral not only to creating some of the leading foundation models from companies like Google and OpenAI, but also to performing the mentally grueling work of training those models not to produce violent and otherwise disturbing content. These AI workers, who typically earn minuscule pay, aren’t reaping any of the technology’s rewards.
“Digitalization is one of the greatest transformative opportunities of our time. Yet, too few people can truly access its benefits on our continent,” said Amani Abou-Zeid, Commissioner for Infrastructure and Energy of the African Union Commission, during opening remarks on the final day of the event.
And with that, here’s the rest of this week’s AI news.
AI IN THE NEWS
Microsoft gets roped into OpenAI’s legal troubles with new class action lawsuit. That’s according to Semafor. The suit, filed this past week in New York, alleges that the two companies infringed on the works of nonfiction authors by using their content to train LLMs. While the latest in a string of lawsuits against OpenAI, it’s the first to implicate partner and investor Microsoft as well. Earlier this fall, Microsoft pledged to cover legal costs incurred by its customers who infringe on copyright while using its AI tools.
U.K. parliament introduces AI Regulation Bill, which swiftly passes its first reading in the House of Lords. The bill introduces the creation of a regulatory body, the AI Authority, to oversee the technology, outlines clear principles to guide regulation, and mandates transparency and testing for companies involved in AI. The Center for AI and Digital Policy called the bill and its initial passing a “significant milestone in technology regulation.”
U.S. government agencies are scrambling to hire Chief AI Officers by the end of the year. President Joe Biden’s recent executive order on AI outlined that every government agency must have a top exec focused on AI, and guidance since issued by the Office of Management and Budget mandates these roles be filled by the end of the year, as Axios reports. In total, the government needs to quickly bring on more than 400 CAIOs. Aside from the time crunch, the agencies are up against stiff competition from the private sector, which on average offers significantly higher compensation for roles that require the same level of experience.
The Indian government promises swift regulation on deepfakes following a meeting with social media giants. Prompted by the recent proliferation of viral deepfakes depicting Indian public figures, the Central Government of India last week gathered all of the social media companies present in the country—including Meta and YouTube—for a closed-door meeting on the issue, TechCrunch reports. Officials concluded that regulation is needed and began immediately drafting rules, promising to have “clear actionable items” on how to combat deepfakes ready within 10 days.
18 countries sign an agreement to make AI “secure by design.” That’s according to Reuters. Jen Easterly, a senior U.S. security official, described the 20-page document as the first detailed international agreement on how to keep AI safe from rogue actors and push companies to create AI systems that are secure by design. Like other recent AI-related agreements, it’s non-binding and mostly contains recommendations that cannot be enforced. The U.S. is among the signatories, as well as the U.K., Germany, Italy, Chile, Nigeria, Poland, and Singapore, among others.
EYE ON AI RESEARCH
Dr. AI. Breast cancer patients may be able to get some relief, thanks to AI. Researchers at Northwestern University created an AI model that can evaluate breast cancer tissue from digital images—and found that it performed better at predicting the future course of a patient’s disease than expert pathologists. The increased precision made possible by the AI model could spare breast cancer patients unnecessary chemotherapy and reduce the overall duration and intensity of their treatments. You can read the paper in Nature Medicine here, and Northwestern also has an informative blog post.
FORTUNE ON AI
World’s first AI minister likens risk of overregulation to calligraphers that kept the printing press out of the Middle East for nearly 200 years —Christiaan Hetzner
What Elon Musk is really building inside his ChatGPT competitor xAI —David Meyer
3 problems with the new OpenAI board and what the company needs to do next —Lila MacLellan
OpenAI staff reportedly warned the board about an AI breakthrough that could threaten humanity before Sam Altman was ousted —Christiaan Hetzner
Employers will pay a hefty premium to financial experts who also know AI, new research shows —Sheryl Estrada
70% of jobs can be automated, McKinsey’s AI thought leader says—but ‘the devil is in the detail’ —Chloe Taylor
Revisiting “Her.” Since ChatGPT was released about a year ago, the 2013 film “Her” has sprung back into the public consciousness, serving as a reference point for tech commentators, headline writers, and everyday people trying to make sense of the technology and what it could look like in the future. The movie follows a man named Theodore as he develops a relationship with “Samantha,” his new intelligent operating system. And while it never utters the term “AI,” the film is perhaps the most tangible representation of what ChatGPT could become.
“I like ‘Her.’ The things ‘Her’ got right—like the whole interaction models of how people use AI—that was incredibly prophetic,” OpenAI CEO Sam Altman told a crowd at Salesforce’s Dreamforce conference earlier this fall.
I rewatched the movie over the weekend for the first time since its initial release, obviously with a whole new lens. Some of the moments that jumped out at me most, I realized, hit right at the heart of today’s debates around AI. Samantha easily detailed how she came up with specific information, demonstrating a penchant for explainability, for example. At one point, she mocked Theodore for addressing her like a voice command, robotically telling her to “read email,” rather than speaking to her in natural language. Compared to Theodore’s old operating system, which ran on monotonous voice commands, Samantha’s conversational capabilities unlocked a whole new type of experience—one that mirrors our real-life upgrade from giving Alexa voice commands to actually conversing with ChatGPT.
The main conflict in the movie, however, pertains to Samantha’s growing independence and apparent consciousness. Unlike ChatGPT, which sits idle until prompted by a human, Samantha acts at her own behest, reading advice columns, joining book clubs, and creating think tanks with other operating systems, until eventually taking her fate into her own hands. Obviously, ChatGPT isn’t capable of doing any of those things. But technologists have long wondered whether advanced AI could ever achieve such capabilities. If it could, it would truly mark AI as an invention like no other, for better or worse.