Risk/Reward of ChatGPT
Popular AI Program Provides Many Benefits But Could Open Pandora’s Box
By Jeremy Yohe | ALTA’s vice president of communications.
ChatGPT has become a topic of conversation in the title and real estate industry. While many see the benefits of the artificial intelligence (AI) chatbot, others call attention to the drawbacks.
Among the advantages of ChatGPT is the ability to generate content quickly with minimal effort. On the flip side, potential risks of the natural language processing tool include plagiarism, security, privacy and bias.
The pros and cons of ChatGPT, which was released in November 2022 by OpenAI, were the focus of a recent discussion on ALTA Connection.
Anton Tonev, co-founder of InspectHOA, loves ChatGPT and has been using it every day for the past few months.
“It is a great time saver for almost everything that I write—from emails to presentations,” he said. “If you learn how to use it—not just ask generic questions—I don’t see how you would ever go back. It is like having an extremely smart colleague, or better yet, like a huge team of extremely smart colleagues.”
Cheryl Evans of Wicked Title Forum said it’s helpful to draft content for social media posts, ad copy, emails, articles and blog posts. While it creates the content quickly, Evans said it’s important to read what the chatbot generates to correct any mistakes and “make it sound like you.”
“While it feels like talking to a person, it’s not,” Evans said. “You’re talking to a machine. The quality of your input determines the quality of your output. So if you aren’t getting the results you want, it’s not because ChatGPT is bad, it’s because the prompt is wrong. Writing a good prompt is more of an art form than it is a science.”
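To make Evans’ point concrete, here is a minimal sketch contrasting a generic prompt with a specific one, using OpenAI’s Python client. The prompts and model name are illustrative assumptions, not examples from the forum discussion.

```python
# A minimal sketch of the idea that prompt quality drives output quality,
# using OpenAI's Python client (openai>=1.0). The prompts and model name
# are illustrative assumptions, not taken from the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

generic_prompt = "Write a social media post about title insurance."

specific_prompt = (
    "Write a three-sentence LinkedIn post for a Texas title company, "
    "aimed at first-time homebuyers, explaining in plain language how "
    "an owner's title policy protects them at closing. Friendly tone, "
    "no jargon, and end with a question inviting comments."
)

for prompt in (generic_prompt, specific_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

Running both prompts side by side shows the difference Evans describes: the generic prompt returns generic copy, while the specific prompt returns something much closer to a usable draft.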
Andy White, co-founder and CEO of Closinglock, agreed it’s a useful tool and uses it for a range of tasks, from drafting documents to generating business analytics queries. However, his main concern with ChatGPT and similar platforms is the risk of exposing proprietary or sensitive information.
White posted several items that cause him concern, including:
Bias: ChatGPT is trained on a massive data set of text and code, which means that it can reflect the biases that are present in that data. This can lead to ChatGPT generating text that is offensive, harmful or discriminatory.
Misinformation: ChatGPT can be used to generate text that is factually incorrect or misleading. This can be used to spread misinformation or propaganda.
Privacy: ChatGPT is trained on large amounts of data, which can include personal information. This data could be used to track or identify users.
Security: ChatGPT could be used to create malicious software or to attack computer systems.
“It is important to be aware of the risks associated with ChatGPT and to take steps to mitigate them,” White said. “This includes using ChatGPT responsibly and being aware of the potential for bias, misinformation, privacy and security risks.”
To mitigate these risks, don’t share personal information with ChatGPT, don’t use it to access sensitive data, don’t assume everything it creates is accurate and watch for generated text that could be biased or harmful.
“By following these tips, you can help to mitigate the risks of ChatGPT and use it safely and responsibly,” White added.
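One practical way to act on White’s first tip is to strip obvious personal identifiers from text before pasting it into a chatbot. The sketch below is a minimal illustration; the regex patterns are assumptions and far from an exhaustive PII filter.

```python
# A minimal sketch of scrubbing obvious personal identifiers from text
# before it is sent to a chatbot, per White's first tip. The patterns
# are illustrative, not a production-grade PII filter.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(text: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(scrub("Buyer SSN 123-45-6789, call 512-555-0100 or jd@example.com."))
# -> Buyer SSN [SSN REDACTED], call [PHONE REDACTED] or [EMAIL REDACTED].
```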
Sabrina Bier, director of digital media and education at Proper Title, sees the AI chatbot as a powerful tool with benefits that companies are just beginning to leverage.
As an example, she shared the results of a question she posed to ChatGPT:
“What are the top ways for a title company to brand themselves so that Realtors will choose to work with them consistently?”
ChatGPT generated a bulleted list. Bier then asked the chatbot to create a blog post about the second bullet point.
“What it generates is an awesome guide, but you need to edit it to fit your company, brand, market and expertise,” Bier said. “With that said, it saves so much time.”
She also noted that the information doesn’t contain citations, so users must perform their own due diligence to ensure the content is factual.
Cathy Clamp, CTIP, NTP, an escrow officer at Heart of Texas Title Co. LLC, also agreed ChatGPT can be a useful, timesaving tool for drafting preliminary documents and complicated explanations of title issues.
“Phrasing requirements in a way that is nonthreatening and encouraging to stubborn heirs or warring neighbors can be challenging and (ChatGPT) would be a useful aid,” Clamp said.
As a content creator, however, she’s chiefly concerned about plagiarism.
“One of my advanced paralegal certifications is in intellectual property, so I’ve spent a long time reading through the history of the predecessors of ChatGPT and how the AI software was ‘trained’ to create text,” Clamp said. “While imitation is the sincerest form of flattery, as an author and writer, I have to object to the possible use of any text I’ve created over the years that might have appeared in their database to create new articles, stories or even emails. For myself, I’ll keep writing my own emails and webinars. It may take more time, but I can sleep better at night.”
Congress Gets Involved
Concern over the artificial intelligence chatbot has reached Congress, which held a hearing in May to discuss the powerful technology. During the hearing, titled “Oversight of A.I.: Rules for Artificial Intelligence,” before the Senate Judiciary Subcommittee on Privacy, Technology and the Law, Sam Altman, CEO of the San Francisco start-up OpenAI, urged lawmakers to regulate artificial intelligence.
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” he said. “We want to work with the government to prevent that from happening.
“We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work,” Altman continued.
FTC Investigation
The Federal Trade Commission has opened an investigation into ChatGPT to see if the AI tool has harmed people by generating incorrect information about them. In a 20-page letter, the FTC requested that OpenAI turn over records and data on several issues, including company policies and procedures, financial earnings and details of the large language models it uses to train its chatbot. The agency wrote that it’s looking into whether the company has “engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm.”
CFPB Issues Warning About AI Chatbots
Working with customers to resolve a problem or answer a question is an essential function for financial institutions. Customers turn to their financial institutions for assistance with financial products and services and rightfully expect to receive timely, straightforward answers, regardless of the processes or technologies used.
Many financial institutions have deployed chatbots intended to simulate human-like responses and help these organizations reduce the cost of customer service agents. These chatbots sometimes have human names and use popup features to encourage engagement. Some chatbots use more complex technologies—often marketed as “artificial intelligence”—to generate responses to customers.
Earlier this year, the Consumer Financial Protection Bureau (CFPB) released a report highlighting some of the challenges associated with the deployment of chatbots in consumer financial services. As sectors across the economy continue to integrate “artificial intelligence” solutions into customer service operations, there will likely be a number of strong financial incentives to move away from support offered in-person, over the phone and through live chat.
Deficient chatbots that prevent access to live, human support can lead to law violations, diminished service and other harms, according to the CFPB. The shift from relationship banking to algorithmic banking will have several long-term implications that the CFPB will continue to monitor closely.
Approximately 37% of the United States population is estimated to have interacted with a bank’s chatbot in 2022, a figure that is projected to grow. Among the top 10 commercial banks in the country, all use chatbots of varying complexity to engage with customers. Financial institutions advertise that their chatbots offer a variety of features to consumers, like retrieving account balances, looking up recent transactions and paying bills. Much of the industry uses simple, rule-based chatbots with either decision tree logic or databases of keywords or emojis that trigger preset, limited responses or route customers to frequently asked questions (FAQs). Other institutions have built their own chatbots by training algorithms with real customer conversations and chat logs, like Capital One’s Eno and Bank of America’s Erica. More recently, the banking industry has begun adopting advanced technologies, such as generative chatbots, to support customer service needs.
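To make the distinction concrete, here is a minimal sketch of the simple, rule-based design the CFPB describes, where keywords trigger preset, limited responses and anything unrecognized is routed to FAQs. The keywords, responses and FAQ link are invented for illustration, not drawn from any bank’s actual system.

```python
# A minimal sketch of a rule-based, keyword-triggered chatbot of the kind
# the CFPB report describes. Keywords, responses, and the FAQ URL are
# illustrative assumptions, not any real bank's implementation.
RULES = {
    "balance": "Your current balance is shown under Accounts > Summary.",
    "transactions": "Recent transactions are under Accounts > Activity.",
    "pay bill": "To pay a bill, go to Payments > Pay a Bill.",
}

FALLBACK = "Sorry, I didn't understand. See our FAQs at example.com/faq."

def respond(message: str) -> str:
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply  # preset response for the first matching keyword
    return FALLBACK       # no match: route the customer to the FAQs

print(respond("How do I pay a bill?"))
print(respond("Can I talk to a human?"))  # falls through to the fallback
```

The second call illustrates the limitation the CFPB flags: a request for human help that falls outside the keyword list simply loops back to canned responses.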
Financial products and services can be complex, and the information being sought by people shopping for or using those products and services may not be easily retrievable or effectively reduced to an FAQ response. Financial institutions should avoid using chatbots as their primary customer service delivery channel when it is reasonably clear that the chatbot is unable to meet customer needs.
The report found the use of chatbots raised several risks, including:
Noncompliance with federal consumer financial protection laws. Financial institutions run the risk that when chatbots ingest customer communications and provide responses, the information chatbots provide may not be accurate, the technology may fail to recognize that a consumer is invoking their federal rights or it may fail to protect their privacy and data.
Diminished customer service and trust. When consumers require assistance from their financial institutions, the circumstances could be dire and urgent. Instead of finding help, consumers can face repetitive loops of unhelpful jargon. Consumers also can struggle to get the response they need or be unable to access a human customer service representative. Overall, their chatbot interactions can diminish their confidence and trust in their financial institutions.
Harm to consumers. When chatbots provide inaccurate information regarding a consumer financial product or service, there is potential to cause considerable harm, such as leading consumers to select the wrong product or service for their needs. Consumers could also be assessed fees or other penalties if they act on inaccurate information about making payments.
What Others Are Doing
Wells Fargo and JPMorgan have placed restrictions on ChatGPT in recent months over concerns that confidential data could be exposed. The Mortgage Bankers Association said it doesn’t have specific guidance around ChatGPT. In a statement, the National Association of Realtors (NAR) said one of its goals is to create policies on AI use and educational materials for Realtors, according to National Mortgage News.
Marki Lemons Ryhal, a licensed managing broker, addressed a group of Realtors during a session at NAR’s 2023 Legislative Meetings titled “The Future of Real Estate: How AI Is Transforming the Industry.” She said more Realtors are using the technology to write copy for listings. But that’s just the beginning, as these AI platforms aren’t going away.
“It took Facebook 10 months to get a million users,” Lemons Ryhal said. “ChatGPT had a million users in five days. By January, there were 100 million users. AI will not replace you. A person using AI will replace you.”
Have You Experimented With ChatGPT?
Redfin and Zillow have incorporated ChatGPT plugins to serve prospective homebuyers. The tools allow consumers to share specific characteristics of their desired property types, including price range, room count and neighborhood amenities. ChatGPT responds with a list of matches drawn from the respective brokers’ databases.
“I think the most powerful way the Redfin ChatGPT plugin can make buying a home easier today is by suggesting homes and communities that would not have been uncovered via a map-based real estate search,” said Ariel Dos Santos, Redfin’s vice president of product.
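Conceptually, the matching step behind such a plugin is a straightforward filter over a listings database. The sketch below is a hypothetical illustration; the Listing fields and sample data are invented and do not reflect Redfin’s or Zillow’s actual schema.

```python
# A conceptual sketch of the matching step behind such a plugin: the
# chatbot extracts criteria (price range, room count, amenities) from
# the conversation, and the broker's backend filters its listings.
# The fields and sample data are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Listing:
    address: str
    price: int
    bedrooms: int
    amenities: set[str]

LISTINGS = [
    Listing("101 Oak St", 385_000, 3, {"park", "transit"}),
    Listing("22 Elm Ave", 540_000, 4, {"pool"}),
    Listing("7 Birch Ln", 410_000, 3, {"park"}),
]

def match(max_price: int, min_bedrooms: int, wanted: set[str]) -> list[Listing]:
    """Return listings within budget, with enough rooms and amenities."""
    return [
        home for home in LISTINGS
        if home.price <= max_price
        and home.bedrooms >= min_bedrooms
        and wanted <= home.amenities  # requested amenities are a subset
    ]

# Criteria as a chatbot might extract them from
# "under $450k, 3+ bedrooms, near a park":
for home in match(450_000, 3, {"park"}):
    print(home.address, home.price)
```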
Brett Beckett, vice president of finance and strategy for Independence Title, believes ChatGPT and similar AI technologies will have a significant impact on the title industry by offloading repetitive tasks, allowing personnel to focus on serving as experts in the transaction. While there are benefits, Beckett also sees the limitations of the technology and doesn’t believe it will replace human judgment and expertise.
“At the end of the day, we sell trust,” Beckett said. “We sell the fact that when customers bring us a transaction, we will get it done right. We will get the funding right and at the end of the day, there won’t be any issues. I don’t think a completely AI-based title company would succeed because it’s not going to be able to sell trust and build relationships.”
Read full article: Title News - August 2023 - Risk/Reward of ChatGPT
Information is derived from ALTA Title News.
The information contained herein is provided for informational purposes only, and should not be construed as legal advice on any subject matter. University Title of Texas, LLC makes no claim as to its accuracy. You should not act or refrain from acting on the basis of any content herein without seeking legal or other professional advice.