Suddenly, ChatGPT is all the rage. From high school students to professional marketers and everyone in between, the AI-based chatbot has taken the world by storm since it first appeared in November 2022. Its use in the security space hasn't gotten much press yet, but there are rumblings about how it might be used in smart building environments, and about the potential cyber-risks of doing so.
A recent article in Engineered Systems magazine looked at the use of ChatGPT in the automated building space and asked ChatGPT itself if it could be used in automated buildings. Here was its answer:
ChatGPT: Yes, ChatGPT can be used in automated buildings as a conversational interface for controlling various systems and devices, such as lighting, temperature, security, and entertainment. For example, a user can interact with a chatbot powered by ChatGPT to request changes to the lighting or temperature in a room or to get information about the current status of the building’s systems. The chatbot can then interact with the building’s automation system to carry out the requested actions.
The article compared ChatGPT to an automated Alexa-like tool in a smart building scenario, which could in theory serve to further automate functions.
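To make the pattern the article describes concrete — natural language in, building-automation command out — here is a minimal, entirely hypothetical sketch. The `BuildingAutomation` class and the crude keyword matching are invented stand-ins for an LLM front end and a vendor's real building management system (BMS) API; they do not correspond to any actual product.

```python
# Hypothetical sketch of routing a chat request to a building automation
# backend. BuildingAutomation is an invented stand-in for a real BMS API,
# and the keyword matching stands in for an LLM's intent extraction.

class BuildingAutomation:
    """Stand-in for a building management system interface."""

    def __init__(self):
        self.state = {"lighting": "off", "temperature": 21}

    def set_lighting(self, mode: str) -> str:
        self.state["lighting"] = mode
        return f"Lighting set to {mode}."

    def set_temperature(self, degrees: int) -> str:
        self.state["temperature"] = degrees
        return f"Temperature set to {degrees} degrees."

    def status(self) -> str:
        return (f"Lighting: {self.state['lighting']}, "
                f"temperature: {self.state['temperature']} degrees.")


def handle_request(bms: BuildingAutomation, text: str) -> str:
    """Very crude intent routing; a real system would use the LLM here."""
    text = text.lower()
    if "status" in text:
        return bms.status()
    if "light" in text:
        mode = "on" if "on" in text else "off"
        return bms.set_lighting(mode)
    if "temperature" in text:
        digits = [int(w) for w in text.split() if w.isdigit()]
        if digits:
            return bms.set_temperature(digits[0])
    return "Sorry, I can't handle that request."


bms = BuildingAutomation()
print(handle_request(bms, "Please turn the lights on"))
print(handle_request(bms, "Set the temperature to 23 degrees"))
print(handle_request(bms, "What is the building status?"))
```

The point of the sketch is the trust boundary it makes visible: whatever sits between the chat interface and the control layer decides what a conversational request is allowed to do to the physical building, which is exactly where the cyber-risk discussion below comes in.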
But there are definite downsides, as seen in recent incidents at Samsung in South Korea. In April it was reported that Samsung Electronics employees had entered corporate information into ChatGPT on three occasions, including one case in which an employee pasted in source code and asked for a fix, and another in which an employee submitted program code and requested optimization.
Incidents like these pose a serious cybersecurity risk. A Corvus webinar discussed the implications of these platforms, including their ability to generate more believable phishing content. There are also concerns about employees using ChatGPT for work tasks, as the Samsung examples show.
In all cases, it is a good idea to develop acceptable-use guidelines for ChatGPT as soon as possible.
“ChatGPT is probably the most exciting thing to come out in recent times,” Bugcrowd Founder and CTO Casey Ellis told Security Magazine. “The unfortunate downside is that an excited user is often also a less cautious one, and one that is more likely to make privacy and security related mistakes, both personally, and on behalf of their employers.”
One good rule of thumb: if you wouldn't share the content you're typing into ChatGPT with your direct competitor, then don't share it with ChatGPT, Melissa Bischoping, director of endpoint security research at Tanium, told Security Magazine.
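One way organizations can operationalize a rule of thumb like this is a pre-submission check that flags content resembling source code or confidential material before it leaves the building. The sketch below is purely illustrative: the patterns are invented examples, not a real or complete data-loss-prevention policy.

```python
import re

# Hypothetical pre-submission filter illustrating the rule of thumb above:
# flag text that looks like source code or confidential material before it
# is pasted into an external chatbot. These patterns are invented examples.
SENSITIVE_PATTERNS = [
    r"(?i)\bconfidential\b",
    r"(?i)\binternal use only\b",
    r"(?i)api[_-]?key",
    r"(?i)password\s*=",
    r"\bdef \w+\(",       # function definitions suggest pasted source code
    r"\bclass \w+[:(]",   # so do class definitions
]


def flag_sensitive(text: str) -> list:
    """Return the patterns that matched, so a reviewer can see why."""
    return [p for p in SENSITIVE_PATTERNS if re.search(p, text)]


def allowed_to_share(text: str) -> bool:
    """True only if no sensitive pattern matched."""
    return not flag_sensitive(text)
```

A pattern list like this will never catch everything, which is why it complements, rather than replaces, written acceptable-use guidelines and employee training.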
There are even cautions coming from the security integrator side of the industry. In the comments to the SDM 100 survey, Paladin Technologies shared this prediction and concern about the chatbot’s use in marketing: “AI like ChatGPT will disrupt the core interactions of our industry this year. When a buyer is asking you to ‘describe’ a process or a quality program, or even your delivery plan, a marginally passable response can be churned out by these tools in a few keystrokes. Paladin does not condone these tools; however, we suspect they will be used. If you’re going to use AI to support your technical writing, use it as a tool and don’t depend on it. Use AI to inspire content, not to create it. Strategies for direct marketing and cold call emails will also change. … As an end user, how do you distinguish interactions from an AI script? Our industry will be encouraged to leverage and enforce more reliance back to our roots: real human interaction.”
The bottom line: be aware of ChatGPT, set rules for your employees, watch for more AI-generated phishing and spam, and make sure your vendors are being open with you about how they are or aren't using this new tool.