
Artificial Intelligence


Together, Microsoft, Alphabet, Amazon and Meta Platforms spent over $50 billion, mostly on AI-related capital expenditures, in the second quarter of 2024 alone. These huge investments signal a bullish outlook on AI’s capabilities. Indeed, the CEO of Anthropic, an AI public benefit startup founded by former employees of OpenAI, is convinced that “powerful AI” will exceed human intelligence by 2026.
 

On the other hand, many people believe the hype surrounding AI is being blown way out of proportion, particularly when it comes to its ability to conquer sensory/motor skills and logical reasoning.
 

In 1988, Hans Moravec, a computer scientist and current adjunct faculty member at the Robotics Institute of Carnegie Mellon University, crystallized the challenge: “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, but difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility… Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it.”
 

Daron Acemoglu, a Nobel laureate and economist at MIT, thinks AI will be able to perform only 5 percent of jobs within the next decade. “A lot of money is going to get wasted,” he recently said. “You’re not going to get an economic revolution out of that 5 percent. You need highly reliable information or the ability of these models to faithfully implement certain steps that previously workers were doing. They can do that in a few places with some human supervisory oversight…but in most places they cannot.”

 

On top of these limitations, there is another major problem with AI: energy consumption.

 

< Note: The section below was written before DeepSeek’s new model was released. The fact that DeepSeek-R1 uses less computing power than the existing U.S. models has called everything into question, including energy consumption. >

Jesse Dodge, a senior research analyst at the Allen Institute for AI – a nonprofit AI research institute founded by the late Microsoft co-founder Paul Allen – says that “one query to ChatGPT uses approximately as much electricity as could light one light bulb for about 20 minutes…so, you can imagine with millions of people using something like that every day, that adds up to a really large amount of electricity.”
 

Put another way, research from financial services company Goldman Sachs says that, on average, a “ChatGPT query needs nearly 10 times as much electricity to process as a Google search.”
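Taken together, the light-bulb and Google-search comparisons can be sanity-checked with a quick back-of-envelope calculation. Every number below (a 10-watt LED bulb, roughly 0.3 watt-hours per Google search, 10 million queries per day) is an illustrative assumption for the sketch, not a figure from the sources quoted above:

```python
# Back-of-envelope AI energy estimate. All inputs are illustrative
# assumptions, not measured figures.

BULB_WATTS = 10            # assumed modern LED bulb
MINUTES_LIT = 20           # "one light bulb for about 20 minutes"
GOOGLE_SEARCH_WH = 0.3     # commonly cited rough figure per Google search (assumption)

# Energy per ChatGPT query implied by the light-bulb comparison:
query_wh = BULB_WATTS * (MINUTES_LIT / 60)   # watts x hours = watt-hours

# Ratio to a Google search under these assumptions:
ratio = query_wh / GOOGLE_SEARCH_WH

# Scale to an assumed 10 million queries per day for a year:
queries_per_day = 10_000_000
annual_mwh = query_wh * queries_per_day * 365 / 1_000_000  # Wh -> MWh

print(f"Per query: {query_wh:.1f} Wh (~{ratio:.0f}x a Google search)")
print(f"10M queries/day for a year: {annual_mwh:,.0f} MWh")
```

Under these assumed inputs the two quoted claims are roughly consistent: a 10-watt bulb lit for 20 minutes works out to about 3.3 watt-hours per query, on the order of ten times a typical Google search, and at scale that compounds into thousands of megawatt-hours per year.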
 

AI already requires thousands of servers, plus the cooling equipment that keeps them running, all housed in thousands of data centers that demand enormous amounts of electricity. To put it in perspective, the U.S. Department of Energy says one data center can require 50 times the electricity of a traditional office building. Complexes with multiple buildings can use up to 20 times that amount.
 

This is causing major problems. Northern Virginia – known as the world’s internet hub, processing almost 70 percent of global digital traffic – uses electricity at a staggering rate. In fact, PJM Interconnection, the regional grid operator for the area, says the usage is unsustainable without hundreds of miles of new transmission lines and continued output from old-school coal-fired power plants that had previously been ordered to shut down because of environmental concerns.
 

Dominion Energy has repeatedly warned that it may not be able to keep up with the energy demand sparked by AI. The utility estimates that AI energy demand in Virginia will likely quadruple by 2035, reaching roughly the amount of electricity needed to power 8.8 million homes. Already, the 50-plus data centers Northern Virginia Electric Cooperative serves account for 59 percent of its entire energy demand. By mid-2028, that number of data centers is expected to grow to more than 110.
 

The real-world consequences of this new reality are massive. In Google’s 2024 Environmental Report, the company revealed its greenhouse gas emissions have increased by 48 percent over the past five years, due to a surge in data center energy consumption and supply chain emissions. Google’s report warns, “As we further integrate AI into our products, reducing emissions may be challenging.”

Likewise, in its 2024 Environmental Sustainability Report, Microsoft revealed its emissions increased by 29 percent over the past four years because of new data centers “designed and optimized to support AI workloads.” Microsoft also warns that “the infrastructure and electricity needed for these technologies create new challenges for meeting sustainability commitments across the tech sector.”
These are all significant issues, but the most important conversations we must have are about bias, discrimination, consumer privacy, the social and ethical implications of AI, and the legal regulations needed to govern all of it. It is critical that we establish ethical frameworks ensuring AI enhances our global strength and benefits society as a whole.

 

Regulating AI is tricky because we must balance its many benefits against a large variety of risks – all without stifling AI’s progress. One way to achieve this is for AI to be overseen by the regulators within the industries and domains where it is being used (e.g., medicine, automobiles, trading) rather than by one central regulator. The good news is that, if we are proactive, we can maintain control over how AI advances instead of being vulnerable to forces beyond our control.

This comes at a time when people across the globe are growing increasingly nervous about AI. A survey from Ipsos, a market research company, shows that, over the last year, 52 percent of respondents “express nervousness toward AI products and services, marking a 13-percentage point rise from 2022.” In America, data from Pew (a nonpartisan American think tank) suggests that 52 percent of Americans feel more concerned than excited about AI, up from 38 percent in 2022.
 

There are valid reasons for this angst. For example, facial recognition technology has become one of law enforcement’s standard investigative tools. A 2024 report from the U.S. Government Accountability Office (GAO) revealed seven law enforcement agencies within the Departments of Justice (DOJ) and Homeland Security (DHS) – including the FBI and Secret Service – use facial recognition technology to support criminal investigations.
 

In some ways, this sounds like a positive development. Law enforcement agencies used this technology to identify many of the troublemakers who participated in the U.S. Capitol insurrection on January 6th, for example. However, there are legitimate concerns surrounding surveillance technologies, ranging from privacy issues to mass surveillance to abuse of power.
 

Potential abuse of these technologies is particularly alarming for racial and ethnic minorities, many of whom, understandably, fear these technologies and their algorithms may be used in a racially biased manner. Research suggests this fear is well-founded. A study conducted by Georgetown University, for example, found that “the risks of face surveillance are likely to be borne disproportionately by communities of color.” This is a real problem given that the GAO found only three of the seven federal agencies mentioned in its report had policies for – or even guidance on – how to protect civil rights and civil liberties.
 

The good news is that an independent commission was established by the U.S. Congress in 2018 to make recommendations to the president and Congress that “advance the development of Artificial Intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.”
 

The National Security Commission on Artificial Intelligence’s final report “presented an integrated national strategy to reorganize the government, reorient the nation, and rally our closest allies and partners to defend and compete in the coming era of AI-accelerated competition and conflict.”
 

To that end, the GAO has made 35 recommendations to 19 agencies to help ensure full implementation of federal AI requirements drawn from executive orders, Office of Management and Budget (OMB) guidance, and a law regarding the implementation of AI.
