How AI-driven chatbots are posing a cybersecurity threat: what you need to know
December 9, 2019

Regardless of how “easy to install” a printer claims to be, you still struggle to get it connected to your WiFi. After some googling, you end up at a support site where, through a chat window, you answer questions in order to receive steps to fix the problem.
During this process, did you share your full name? Location? Where you purchased the equipment? Before you started the process, did you verify that the support site you’re using was legitimate?
There are many benefits to AI-driven chatbots. They can save a company time and money by answering basic, common questions, resolving issues quickly with little to no staff involvement and at a higher volume than a human-driven chat system. Because these systems are AI-driven, they can also learn from the situations they encounter and continually offer better support.
Along with these benefits, however, come security concerns. Depending on the purpose of the chatbot, protected information may be shared – imagine using an automated chatbot to communicate with your health insurance company or bank. How do you know that what you are sharing is being transmitted in a secure manner? On the receiving end, how does a company ensure that their clients’ information is protected?
From the business perspective, any organization looking to use a chatbot to provide customer support needs to ensure that proper security protocols are in place before releasing a chatbot for public use.
As reported by David Roe in CMSWire, “Ideally, the platform should be tightly integrated with enterprise security systems and seamlessly support authentication, authorization and auditing, and data security mechanisms,” no matter the type of chatbot system being utilized.
In order to keep chatbots secure, it’s important to control access so that only properly authenticated individuals can use the system. Additionally, security guidelines must be applied to the chatbot tool, just as they would be for a website, email system, or any other digital medium.
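As a loose illustration of that kind of access control, here is a minimal Python sketch that gates a chat session behind a token check. The token store, user IDs, and function names are hypothetical and stand in for whatever identity system an organization already has.

```python
import hmac
import secrets

# In practice, tokens would come from an enterprise identity system;
# this in-memory store exists only for the example.
ISSUED_TOKENS = {"customer-42": secrets.token_hex(32)}

def is_authenticated(user_id: str, presented_token: str) -> bool:
    """Allow a chat session only for callers presenting a valid token."""
    expected = ISSUED_TOKENS.get(user_id)
    if expected is None:
        return False
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, presented_token)

def start_chat_session(user_id: str, presented_token: str) -> str:
    """Refuse to open a session unless the caller is authenticated."""
    if not is_authenticated(user_id, presented_token):
        raise PermissionError("authentication required before chatting")
    return f"session-{secrets.token_hex(8)}"
```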
“The simplest precaution is to ensure you're not giving digital assistants unlimited access to sensitive data and carefully monitor/filter what's allowed to pass between these separate systems,” states Roe.
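As a rough sketch of the kind of filtering Roe describes, the snippet below scrubs obviously sensitive patterns from a message before it is handed from the chat front end to back-end systems. The patterns are simplified examples, not a complete data-loss-prevention rule set.

```python
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-like values
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # card-number-like runs of digits
]

def redact(message: str) -> str:
    """Replace sensitive-looking substrings before forwarding the message."""
    for pattern in SENSITIVE_PATTERNS:
        message = pattern.sub("[REDACTED]", message)
    return message

print(redact("My SSN is 123-45-6789 and my card is 4111 1111 1111 1111"))
# -> "My SSN is [REDACTED] and my card is [REDACTED]"
```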
As outlined in the article, other methods to ensure chatbots are secure include:
- Using a developer interface that prevents unsafe practices that could lead to security issues
- Using a chatbot built on HTTPS
- Auditing and reporting all logins
- Applying strict security guidelines, including protections against common attack techniques such as SQL injection (a small sketch follows this list)
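On the SQL injection point, the usual defense is to never build queries by pasting user input into SQL strings. The sketch below uses a parameterized query instead; the table, columns, and order IDs are invented purely for illustration.

```python
import sqlite3

# Hypothetical order table used only to demonstrate the query pattern.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id TEXT, status TEXT)")
conn.execute("INSERT INTO orders VALUES ('A-100', 'shipped')")

def order_status(user_supplied_id):
    """Look up an order using a bound parameter, not string concatenation."""
    # Unsafe pattern to avoid:
    #   f"SELECT status FROM orders WHERE order_id = '{user_supplied_id}'"
    row = conn.execute(
        "SELECT status FROM orders WHERE order_id = ?",
        (user_supplied_id,),
    ).fetchone()
    return row[0] if row else None

print(order_status("A-100"))             # shipped
print(order_status("A-100' OR '1'='1"))  # None; the injection attempt matches nothing
```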
From the user perspective, simple checks can help keep you protected.
- Does the website you are using have security in place? Most browsers indicate whether a connection is secure, for example with a padlock icon in the address bar (a rough sketch of what that check involves follows this list).
- Are you accessing the company’s official website? Web search results can surface lookalike or fraudulent support sites, so verify the address before entering any information.
- Are you careful about what you share? Avoid sharing social security numbers, credit card numbers, or account numbers whenever possible.
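For the curious, the browser’s padlock check roughly amounts to opening a TLS connection and validating the site’s certificate. The Python sketch below performs a similar check; the hostname is a placeholder, and this is not a substitute for the browser’s own verification.

```python
import socket
import ssl

def has_valid_certificate(hostname: str, port: int = 443) -> bool:
    """Return True only if a TLS handshake with certificate validation succeeds."""
    context = ssl.create_default_context()  # verifies the chain and hostname
    try:
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=hostname):
                return True
    except (ssl.SSLError, OSError):
        return False

# "www.example.com" is a placeholder; substitute the site you intend to use.
print(has_valid_certificate("www.example.com"))
```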
As long as both users and businesses keep security in mind, AI-driven chatbots will continue to be an effective tool to resolve problems efficiently.
Want to learn about cybersecurity? Capitol offers bachelor's, master's and doctorate degrees in cyber and information security. Many courses are available both on campus and online. To learn more about Capitol’s degree programs, contact admissions@captechu.edu.