Safe House? How Smart Home Devices Pose Digital Security Risks
The COVID-19 pandemic has altered the way many people around the world live and work, with one of the most significant adjustments being the move to working from home. The shift to remote work has opened the door to a host of new possibilities for connecting and collaborating outside of the office. However, this new reliance on online connectivity has also created new cybersecurity risks involving the unauthorized access and use of sensitive information. These threats are heightened by the fact that many Canadians were vulnerable before the pandemic given their use of smart home devices.
What are Smart Home Devices and Voice Assistants?
Smart home devices are Internet-connected devices that enable the remote operation of applications, appliances, and systems connected to the device. They work by responding to the user’s voice commands, employing voice-recognition software commonly referred to as voice assistants. The world’s largest technology companies offer their own voice-recognition software on many of their devices, including Google (Google Assistant), Apple (Siri), Amazon (Alexa), and Microsoft (Cortana).
Smart home devices are marketed for convenience and connectivity: without lifting a finger, the user is able to access information via the Internet or complete specific tasks, such as adjusting the thermostat. Despite the apparent benefits the technology offers, these devices may also increase the vulnerability of your personal information.
The Issue: Data Collection, Human Monitoring, and the Cloud
Voice assistants rely on user voice commands to complete a variety of tasks like placing phone calls, checking the weather, booking a flight, playing music or shopping online. Since voice assistants are activated by the user’s voice, the device’s microphone is always on (and always listening). While technology companies claim the devices only record and transmit information when activated by a voice command, the technology is still in its infancy. This means technological glitches and programming errors could present new threats to your personal information.
For example, where the device mistakes a word or phrase as a voice command, it could inadvertently send a text message or place an order for delivery from your favourite restaurant. People working from home during the COVID-19 pandemic should also exercise caution to ensure potentially private or confidential client information is not inadvertently collected by an active smart home device.
Technology companies claim they collect and analyze voice commands recorded by their devices only to improve the responsiveness and accuracy of their voice assistants. The recorded voice commands are then reviewed by human technicians. This process, generally referred to as human monitoring, is used by all of the major technology companies – and is, unfortunately, subject to both human and technical error. In 2019, Google admitted that contractors it had hired to review voice messages leaked more than 1,000 private conversations to a Belgian news company. A month later, Apple switched to an opt-in human monitoring program after reports surfaced that it was hiring third-party contractors to review recorded voice messages to improve Siri – without the clear consent of its customers.
However, technology companies analyze more than just the data the smart home device records. Voice assistants passively collect personal information from other connected applications as long as the user grants access. As a result, your calendar, geolocation, and web-browsing history might all be tracked by the device. The voice assistant then uses this information to provide targeted information and advertisements based on the profile it builds. For example, Alexa or Siri may warn you of heavy traffic because it notices an upcoming appointment in your calendar. As more apps and appliances become compatible with voice assistants, more personal information about the user will be passively collected, stored, and analyzed by the smart home device.
The way smart home devices respond to your voice commands also raises privacy concerns. Voice assistants rely on cloud computing to understand and answer voice commands. Voice command recordings and user profiles are stored in the cloud, making the data vulnerable to unauthorized third-party access. This risk is heightened by the COVID-19 pandemic, which has forced people to work from home, where sensitive information could be collected by the device and exposed to malicious hackers in the event of a security breach or leak. While none of the big technology companies have yet announced a privacy breach of their voice assistant databases, current trends suggest it is only a matter of time before security weaknesses in cloud technology are exploited.
The easiest solution to limit these cybersecurity risks is to avoid using a smart home device altogether. However, virtual assistants have become critical for many people working remotely during the pandemic. If you have to use such a device, there are several steps you can take to better protect your personal information:
• Ensure the device includes adjustable privacy settings and features such as the ability to remotely mute the device.
• Research the privacy policies of the different technology companies to understand how they collect and access your private information (particularly when it comes to how they could share your information with third parties).
• Review and delete recorded messages frequently.
• Opt out of human monitoring.
• Unplug devices that cannot be muted when working from home.
• Keep your device’s software up to date.
• Change your passwords frequently.
• Do not let children use virtual assistants unsupervised.
The Cybersecurity and Data Privacy Group at Cox & Palmer is happy to advise organizations and their employees on how best to work remotely – and securely – as the COVID-19 pandemic continues.