Quarterly AI Security and Compliance Overview
Welcome. Once each quarter we will be highlighting security and compliance topics of interest to our customers. For this inaugural security and compliance issue, we focus on AI regulatory developments, inform you about Conversica's responsible use of AI, and recommend best practices for furthering transparency in your organization's use of Conversica's AI-powered services.
The Emergence of Generative AI and Developer and User Concerns
In 2023 "generative AI" became a phenomenon. Commercially deployed generative AI systems generated both wonder and apprehension, even among its leading developers, while OpenAI's ChatGPT attracted the most users of any new app in history. Many businesses sought to capitalize on the excitement around generative AI by launching a slew of new AI-powered services, some without adequate safeguards.
If 2023 may be considered the year of AI, it can also be considered the year in which it became obvious that the responsible use of AI, including transparency about its use, is essential for businesses that seek to succeed in offering AI-powered services to their customers. In its recent report, "State of Ethics and Trust in Technology," Deloitte found that survey respondents highlighted concerns over the responsible use of AI, its safety and security, transparency in its use, and its reliability, among other issues. With respect to chatbots in particular, concerns regarding data privacy and transparency were top of mind.
AI Regulatory Developments and Frameworks
In late 2022 the White House released its Blueprint for an AI Bill of Rights, recommending five principles to follow in the design, use, and deployment of automated systems to protect the American public. Rather than seeking legislation targeting AI specifically, the White House is advocating for risk-based controls where the use of AI may pose a risk to the rights of Americans. While members of Congress may have other ideas, those ideas have yet to materialize into legislation likely to become law in the near future.
In January of 2023, the National Institute of Standards and Technology released its AI Risk Management Framework and companion playbook along with other tools that offer detailed guidance for American companies seeking to use AI responsibly and minimize their risk profile in doing so. The framework is designed to enable organizations that deploy it “to improve [their] ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.” Here at Conversica, we studied and applied this risk management framework to develop the AI-related initiatives discussed below under "Conversica's Responsible Use of AI."
In the absence of an AI-focused federal law, the FTC and other U.S. agencies made it clear that existing laws on the books make no exception for AI and that they would be applied vigorously where AI threatened civil rights, non-discrimination, fair competition, or consumer protection. Separately, FTC Commissioner Alvaro Bedoya noted that AI is already regulated under unfair and deceptive trade practice law, civil rights law, tort and product liability law, and the common law, and he urged the "maximum of transparency" in the use of AI so as not to deceive users.
The European Union, for its part, reached a final agreement in December on the provisions of its trailblazing AI Act, which will take effect in 2024 subject to grace periods for particular provisions. The AI Act, like the GDPR, has a broad scope: it will apply not only to organizations in the EU but also to those providing AI model services and AI-powered services to customers in the EU. Also like the GDPR, the AI Act will authorize the imposition of significant fines for significant violations. Similar to the White House Blueprint, the AI Act incorporates a risk-based approach. It will ban outright certain AI applications while imposing greater obligations on providers of systems the AI Act classifies as high-risk and lesser obligations on providers of lower-risk systems.
Conversica's AI-powered services and its current use cases pose minimal risk in terms of any negative societal impact. However, we take note of the European Union's position on chatbot transparency stated in its official FAQ:
"For certain AI systems specific transparency requirements are imposed, for example where there is a clear risk of manipulation (e.g. via the use of chatbots). Users should be aware that they are interacting with a machine."
You'll see our suggestion related to this requirement under the "Best Practices" section below.
Conversica's Responsible Use of AI
In line with best practices for AI developers and users of AI systems, earlier this year we established an executive-level AI Governance Committee to guide the company's use of AI systems in connection with the development and delivery of our services. We also established and informed our employees of our policy regarding their use of AI tools to help them do more for Conversica.
In establishing our AI-specific controls, we applied the NIST AI Risk Management Framework. We also followed the Framework to better explain the components of AI model training, the types of AI technologies we use and for what purpose, the type of automated decision-making employed by Conversica digital assistants, and the governance and safeguards we have implemented to prevent inappropriate conversations by Conversica digital assistants and unintended consequences. Significantly, no data received from our clients are submitted to the models we use. More details regarding our responsible use of AI may be found in the document "Conversica's Responsible Use of AI" which you can request from your customer service representative or via the Resources page on our Trust site.
Generative AI Best Practices
Stay Ahead of Generative AI Security and Compliance
Conversica believes that those engaging with an AI system, or impacted by one, should be aware of that fact. This position is in line with the FTC guidance and the AI Act discussed above. As a best practice, we recommend that your organization identify its Conversica revenue digital assistants ("RDAs") as “digital assistants,” “AI-powered,” or otherwise in a way that makes clear that the humans on the receiving end of an RDA conversation are not interacting with another human. For example, see K.D., a Conversica RDA, introduce itself in chat after you click on K.D.’s icon on the Conversica home page. When we provision each RDA for a client, the client has the flexibility to declare AI in the RDA’s signature line for all communications.
Consistent with the above, we recommend arranging for your RDAs to tell it like it is when asked whether they are human or AI-generated. Please see the Conversica Help Center article on this subject.
We asked ChatGPT 3.5 if it could provide us with links to surveys showing that individuals engaging with AI respond favorably when told they are interacting with AI. While it did not provide the links, it touched on the benefit of transparency among the positive aspects of AI. Here is the response:
"As of my last knowledge update in January 2022, I don't have specific links to recent surveys readily available. However, I can provide you with some general information based on trends up to that point.
Several surveys and studies have indicated that consumers often respond positively to interactions with AI when they are aware that they are engaging with artificial intelligence. Transparency and honesty about the use of AI in customer service or other applications can build trust and manage expectations. Here are some key findings and considerations:
Transparency Builds Trust: Many consumers appreciate transparency about the use of AI in customer service. When informed that they are interacting with a machine, they may be more understanding and accepting of any limitations.
Efficiency and Accuracy: Surveys have shown that consumers value the efficiency and accuracy that AI can bring to various tasks. When AI is used to streamline processes and provide quicker, more accurate information, it is generally well-received.
Personalization: AI can enable more personalized experiences by analyzing data and tailoring recommendations. Consumers often respond positively to personalized interactions, but they also appreciate knowing that AI algorithms drive them.
Communication Styles: Consumers may prefer different communication styles when interacting with AI compared to human agents. Understanding and adapting to these preferences contribute to a positive experience."
We couldn't agree more.