New developments in artificial intelligence are unstoppable, despite a call from some experts to step on the brakes, business attorney David Miller said.
“AI is embedded in so many businesses already – from gas compression monitoring to medical advancements,” Miller said. “The problem with the pause is if in the U.S. we find a way to pause it, the capital and labor will move to other places.”
A national debate about whether AI developments are moving too rapidly to be safe was launched last month in an open letter issued by the Future of Life Institute.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter stated.
“The problem with AI is it’s difficult to define it, so it’s difficult to regulate it. Pausing without defining it will be impossible and not beneficial,” said Miller, who practices in Dallas. “We might spend 10 years trying to define AI and by then it may be something else.”
In specialized areas like AI, the parameters of legislation and regulation usually are defined by the courts, Miller said. In 2022, more than 100 lawsuits were filed involving AI, 10 times more than five years ago, he said. “They will set some parameters as they move through the system.”
Industry associations can set standards for AI use. That would be much quicker but requires cooperation among competitors, Miller said.
“We’re going to be going through some very challenging times,” he said. “It’s a fascinating problem for all of us to deal with.”
In a recent global survey, 65% of business and IT executives said they believe there is data bias in their organization, and 78% said data bias will become a bigger concern as artificial intelligence and machine learning use increases.
“Data Bias: The Hidden Risk of AI” was released by Progress, a company that helps its customers make smart use of data to drive business outcomes. Conducted by the research firm Insight Avenue, the survey was based on interviews with more than 640 business and IT professionals, director level and above, who use data to make decisions and are using or plan to use AI and ML to support their decision-making.
When it comes to AI and ML, the algorithms are only as good as the data used to create them. If data sets are flawed – or worse, biased – incorrect assumptions will be baked into every resulting decision, the report states.
“Every day, bias can negatively impact business operations and decision-making – from governance and lost customer trust to financial implications and potential legal and ethical exposure,” said John Ainsworth, Progress executive vice president and general manager.
Business practices based on biased AI data can have severe consequences for those negatively affected, the survey revealed, citing examples in retail, finance and health care.
One famous retailer found that a flawed hiring algorithm put forward only men for open technology roles, excluding otherwise qualified female candidates.
A financial institution found itself wrongly rejecting qualified loan candidates because of a flawed AI tool discriminating by applicant ZIP code.
A company using AI to assign health care eligibility wrongly assigned lower health risk status to Black patients, denying them the proper care they were entitled to and leading to adverse medical outcomes.
In the legal field, Miller sees the benefit of using AI to search relevant case law. He also knows of a half-dozen cases in which AI was used to draft a will or a contract that proved insufficient, raising the question: Who is responsible – the user or the provider of the document?