An AI chatbot designed to help New York City residents start and run businesses has been criticized for dispensing inaccurate advice that could lead users to break the law. The chatbot, the subject of a March 29, 2024 investigation by The Markup, was intended to streamline business operations but has instead become a source of misinformation.
Many users reported receiving dubious instructions that could create legal problems if followed. The bot appears to be steering entrepreneurs in the wrong direction, despite being built to give advice consistent with current laws and regulations. Ongoing reviews aim to correct the bot's behavior and stem the spread of false information. In the meantime, business owners are encouraged to verify the bot's answers with a business advisor or legal professional.
The Markup faulted the chatbot for repeatedly providing incorrect or incomplete information about housing policy, business rules, and labor rights. Its tendency to deliver outdated or overly general advice has fostered misunderstandings about local laws and regulations.
A key concern was the bot's misrepresentation of housing rules and landlord responsibilities: it gave faulty guidance on eviction proceedings and misstated tenant screening requirements. These inaccuracies are steadily eroding the tool's value.
The chatbot's erroneous guidance on security deposit laws has likewise created confusion about landlords' financial obligations.
Rosalind Black, Citywide Housing Director at Legal Services NYC, condemned the bot for wrongly telling landlords that locking out a tenant is legal and that there are no caps on rent. Black warned that such mistakes could worsen NYC's housing crisis and leave vulnerable individuals unprotected; in her view, an accurate understanding of the law is essential to ensuring fair practices. She urged rigorous fact-checking and continuous updates to keep the chatbot in sync with prevailing housing laws and regulations.
The chatbot raised further alarm by providing false information about consumer and labor protections. It claimed that businesses can refuse to accept cash, that employers are entitled to keep employee tips, and that there are no rules requiring workers to be notified of shift changes. The bot also gave wrong answers about public transportation, hazardous waste disposal, safety equipment requirements for businesses, permit requirements for public demonstrations, and maintenance requirements for rental properties.
Given these significant errors, some have suggested taking the bot offline if it continues to relay incorrect information. The episode underscores the need for accuracy when deploying public-facing AI systems: false information can drive poor decisions and cause real harm, making responsible AI use an urgent requirement.