AI In The News (January 2025)
Battle of the Giants Starts over AI
X Corporation is suing the State of California over a law passed by the Assembly (AB 2655) intended to guarantee that AI-generated content is not deceptive or misleading. The bill places 5 distinct, multi-faceted requirements on AI implementations, plus a list of who can file lawsuits (essentially anyone) and 3 tight deadlines for responses. X Corp. has filed a 65-page, 3-point complaint arguing that the law is unconstitutional, conflicts with federal law, and inflicts unreasonable and impractical burdens on platforms such as X. This will be an interesting, highly impactful, and probably LONG process.
Additional California Action on AI
In addition, California has taken two other actions regulating AI. The first is a regulation from the California Privacy Protection Agency governing the use of AI in hiring and other employment decision-making. The regulations require (a) pre-use notice to anyone who might be affected, (b) bias review to eliminate discrimination based on protected characteristics, (c) an opt-out option allowing applicants to have a qualified human review their credentials, (d) annual cybersecurity audits, and (e) annual risk assessments to ensure protection of private information and to define specific reasons for the system's use.
The second is a bill covering the use of generative AI in healthcare. It requires those using the technology to give patients written or oral notice that AI-generated information was just that (with specific content and format requirements), plus “clear instructions on how a patient may get in touch with a human health care provider, employee of the facility, or other appropriate person regarding the message.” (See also the next article.) The requirement does not apply to communications that are generated by AI but are reviewed by a licensed or certified healthcare professional.
AI Agents to Be the “Next Big Thing” in AI?
An “AI agent” is described in a technical newsletter as “AI-fueled software that does a series of jobs for you that a human customer service agent, HR person, or IT help desk employee might have done in the past, although it could ultimately involve any task.” Several organizations have already released AI agents to assist with shopping, hotel reservations, finding household items, recipes, and the like; OpenAI is expected to launch its version in January. The definition is elusive because different agents are being developed for different purposes. Nevertheless, as noted above, various governmental units are already regulating – or trying to regulate – them, and employers and creators will just have to keep up.
Some Fallout from DOL’s Attempt to Re-Write FLSA
When the DOL raised the salary thresholds, Ohio State University increased the salaries of 306 exempt employees in November to keep them in exempt status. Now that the DOL's action has been halted in court, OSU is reversing the raises, which totaled about $2 million. The extra pay disbursed in November and December will not be “clawed back,” however; instead, the original rates will take effect in January. (The university has about 50,000 employees overall.)
Playing Hardball in AI-World
One of the “big boys” in AI development, OpenAI is alleged to have been playing a bit fast and loose with material developed by others. OpenAI is trying to consolidate 8 different copyright suits into one, but it is meeting resistance, at least in part because the plaintiffs are located around the country, not in a single court district.
In the meantime, OpenAI has introduced a new text-to-video generation tool, called Sora. According to a writer specializing in the AI realm, images generated by Sora bear a strong resemblance to existing commercial figures. The conjecture that they may have potential copyright problems is coming to
fruition: both OpenAI and Microsoft are being sued for “allegedly allowing their AI tools to regurgitate licensed code.”
Another dimension of AI implementation is the conjecture that law firms will use it for case research, in which case they may move from the traditional “billable hour” approach to one of “software as a service” (SaaS). To be continued…
Cybersecurity Still Critical
The number of “bad actors” attempting to undermine both government and private-sector networks is ever-increasing, and the government is attempting to help companies protect their own infrastructure. The Cybersecurity and Infrastructure Security Agency (CISA), part of DHS, conducted a red-team assessment at the request of a contracting organization (unnamed). The subject passed a phishing test, but the CISA operatives discovered an “unused web shell left from a previous Vulnerability Disclosure Program.” They used the web shell to gain access to the system and then escalated their privileges (i.e., expanded their access throughout the system). Phishing-resistant multifactor authentication was among the recommendations in the resulting report.
Example: North Korean Penetration of US Companies
Along the same lines, Reuters has reported that North Korean IT workers who were hired into American companies and non-profits rewarded their employers by stealing their trade secrets and holding them for ransom. The ransom money was then transmitted to North Korea, which used it to purchase weaponry. According to the article, “The U.S. State Department said about 130 North Korean workers got IT jobs at U.S. companies and nonprofits from 2017 to 2023 (using stolen IDs) and generated at least $88 million that Pyongyang used for weapons of mass destruction.” (Part of the $88 million was the wages paid to the workers.) The DOJ has indicted 14 North Koreans who were part of the scheme. One North Korean IT defector told Reuters in November 2023 that he would try to get hired and then create additional fake social media profiles to secure more jobs.