The Congressional Bill on AI: What it Means for Enterprises, Vendors, and the Future of AI
With the onset of a world dominated by artificial intelligence (AI), people have been wondering just what the future of AI will be. Many predict AI will save all humankind, while others suspect it will be the end of humankind. One thing's for sure: AI is here. For the people who fear for their lives, not just their jobs, there are big questions. Will AI be regulated? Should it be?
Just before the New Year, Congress proposed a set of companion bills that seek to do just that. The bills propose answers in the form of a regulatory committee, known as the FUTURE of AI Committee. Its members, drawn from both the public and private sectors, will be responsible for exploring issues associated with AI, including accountability and legal rights, workforce impacts, ethics, machine learning bias, and competitive questions such as how to encourage investment and innovation in AI.
The bill also distinguishes between artificial general intelligence (AGI) and “narrow” AI, a distinction we often make for our audience:
- AGI has a wide range of skills and knowledge that it learns on its own, similar to the way humans learn. It resembles the all-knowing AI bots of film and TV, able to answer an enormous breadth of questions and perform many different tasks.
- Narrow AI is purpose-driven technology that specializes in performing complex, repetitive tasks that humans can also perform. It is ideal for business processes.
But the bill does not establish a single, all-encompassing definition of artificial intelligence, and this will likely be one of the most challenging aspects the committee will face. If something is not clearly defined, how can you know its potential, both good and bad? And how can you regulate it?
This is particularly important for legal issues, such as determining responsibility when an AI system violates the law. It is conceivable that AI systems may soon face situations where both available choices violate some law. Who becomes accountable for the violation, and to what degree? These difficult choices also raise questions of morality and ethics, which the Committee must grapple with as well. Many of these choices are tough even for humans because they fall into gray areas that often require split-second decisions, much like the scenarios the MIT Moral Machine presents. These challenges raise another fundamental question: who should be regulating AI?
Congress, as the United States' leading lawmaking body, is obviously leading the charge, which makes sense. But is it the right group? More broadly, is there a right group? While Congress does seem to have included industry leaders and so-called AI experts in this initial proposal, the huge number of people affected by AI applications calls for a more in-depth conversation.
For example, let’s think about an AI self-driving car. Not only will the laws affect the passenger or owner of the car, but they will also affect other drivers on the road, bystanders, the company that makes the car, and the engineers behind it. Now let’s say that you drive, or more likely tell your car to drive, from the US to Canada. Now we’ve gone from a simple domestic conversation to an international one. And just like that, with a single application of AI, we can see the breadth of people potentially affected by one self-driving car.
We cannot think about future regulations solely from the standpoint of ethical consequences; we must also think about the technology itself. We have seen from technology moguls such as Elon Musk and Bill Gates that not everyone believes artificial intelligence is free of dangers. These warnings about the potential negative consequences of AI, especially from well-known industry names, have sparked a certain amount of public anxiety. Much of this fear comes from the fact that, as mentioned, the future of AI feels like a mystery. However, as we have written before, the future of AI is essentially digital smartness running behind everything; in other words, it will transform and disrupt our society in a mostly boring way, improving business processes and streamlining our everyday lives.
In other words, AI is not scary, and should not feel scary. In fact, many of the tried-and-true approaches to building AI systems are being scrutinized and, in some cases, thrown out altogether. More importantly, perhaps these bills have come about too soon, before there is even a widely accepted definition of what AI actually is. Perhaps this committee will be attempting to regulate technology that is not fully baked, and may not be for many years. Regulations that come too soon could halt progress in the field or alter its course in a damaging way. Time will tell.
So what does this Congressional bill mean for you?
If you work at a large enterprise, this should help validate any interest you have in artificial intelligence. It should definitely be a priority for your team. If Congress seeks to regulate something, there is a good chance it is, or will be, an integral "something" in the world. You may want to start introducing it to your business sooner rather than later, before regulations slow things down.
If you are an AI vendor, you'll want to read through the proposed bill and identify areas where you may need to prepare for future regulation. Whether you work on AGI or narrow AI, you may be required to go through new, additional steps before launching your next big AI product. Perhaps you will write to Congress with additional objectives, questions, or recommendations for the Committee.
As a developer, these are likely issues and questions you have been pondering, evaluating, and answering for years. Developers already bear enormous responsibility for the products they create, and that won't change. But now you play an important role in how regulations will actually play out in real life. Will they work? Will they change behaviors? Will they limit the effectiveness of AI? You will probably be the first to know.
Regulating AI will be an ongoing challenge, and one we may not be ready for yet. Just as many doubted the Digital Age when it truly began in the '70s, many dread the AI Age and the change it will bring. However, there is no denying that, like the Digital Age, the AI Age will move humanity forward, creating more jobs in the long run and allowing us to broaden our capabilities like never before. Even now, we have come a long way, and it would already be difficult to imagine life without AI.