There is no shortage of advice on the Internet about how to secure IoT devices. When not advertising products that could “protect all your Internet-connected home devices from hackers”, much of the user-targeted advice found online centers on very basic measures such as strong passwords, two-factor authentication (2FA), software/firmware updates, encryption, and firewalls, or on slightly more advanced approaches such as installing a Unified Threat Management (UTM) appliance (in other words: suggesting that home users buy products targeted at businesses), “protecting the perimeter” (with no accompanying explanation to the user as to what a perimeter is), etc.
While some of this advice is useful for protecting users from obvious vulnerabilities/threats, much of it might do more harm than good and lull users into a false sense of security (e.g. you are not automatically safe if you encrypt and/or 2FA all the things). Further, directly transposing security solutions designed for large IT infrastructures onto home users, without meaningfully tailoring those solutions to novice users, is not a great strategy (e.g. risk assessment is traditionally performed by security professionals, and expecting a novice user to meaningfully perform it, especially in an interconnected ecosystem like the IoT with complex, non-obvious attack paths, is an unfair burden; the same goes for asking home users to adopt products targeted at enterprise security). In essence, educating users can only go so far in an ecosystem as complex as the IoT, and we are still left with problems such as these:
- Users might not be able to assess what is relevant/irrelevant information, and meaningfully internalize the security advice that is available online.
- Users might lack the technical expertise to gauge how the threats/vulnerabilities apply in context.
- Users might lack the experience to analyze how the information on threats/vulnerabilities/advice fits into the bigger picture (i.e., their overall Personal Threat Model).
So, it looks like we need some sort of framework to address the issues listed above. In my recent op-ed for Ars Technica, I discussed how there are currently no proper mechanisms to help users assess their risk in the IoT ecosystem, and how Personal Threat Models are quite inadequate in this context. In an attempt to reduce the ambiguity around users determining their risk-tolerance threshold, I proposed that we narrow the scope of the problem to “Internet-connected lifestyle products” (instead of solving for the entire IoT domain), and that we conceptualize Personal Threat Models as a combination of tangible and intangible factors (e.g. monetary loss is a tangible factor; “unease” over some third party owning or having access to data is an intangible factor) to better account for the user decisions/behavior behind assessing risk.
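To make the tangible/intangible distinction concrete, here is a minimal sketch in Python of how such a Personal Threat Model might be represented. The factor names, severity scale, and threshold are entirely hypothetical, made up for illustration rather than drawn from any actual framework:

```python
from dataclasses import dataclass, field

@dataclass
class Factor:
    """A single element of a Personal Threat Model."""
    description: str
    tangible: bool   # True for e.g. monetary loss; False for e.g. "unease" over data access
    severity: int    # hypothetical 1-5 rating assigned by the user

@dataclass
class PersonalThreatModel:
    factors: list = field(default_factory=list)

    def concerns_above(self, threshold):
        """Return the factors that exceed the user's risk-tolerance threshold."""
        return [f for f in self.factors if f.severity > threshold]

# Hypothetical model for a user weighing a smart lock purchase:
ptm = PersonalThreatModel(factors=[
    Factor("Cost of replacing the lock after a compromise", tangible=True, severity=2),
    Factor("Unease over a third party holding my entry/exit history", tangible=False, severity=4),
])

for factor in ptm.concerns_above(threshold=3):
    print(factor.description)
```

Even a toy structure like this makes the point that intangible factors are first-class elements of the model, not an afterthought.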
More details on how such a user-centric approach to Personal Threat Models could address some fundamental gaps in the status quo can be found in my op-ed, but there might be other tangential (albeit suboptimal) ways to help users assess risk, and exploring these hypotheses might produce better outcomes than... say, users randomly following some online advice without truly understanding whether that advice applies to them specifically, or users buying some product advertised online that claims to make home IoT devices secure.
Identifying and discussing all the methods that could work in this context is a longer-term research project, but I want to start by quickly describing one such idea. If nothing else, some of this information could be useful to security professionals or journalists who want to present the risks/implications of a new product in ways that users might more easily internalize.
“Priming” users to assess risk by comparing a new product to other IoT products that seem similar:
To help users visualize risk better, information about the threats/vulnerabilities/risks of a new Internet-connected lifestyle product might be more useful if presented in the context of a product the user is already familiar with. This beats security advice that merely points out potential vulnerabilities of the new product in a generalized manner, and/or reiterates generic best practices without explicit focus on the specific threats posed by the new product and their implications.
This might seem obvious, but it does not seem to be how much of the security advice pertaining to Internet-connected lifestyle products is presented, probably because until very recently (and even now), there were no precedents for some of the security implications these products pose. There might not always be an analogous product, but given how pervasive Internet-connected lifestyle products have become, there is a good chance that a new product shares some attributes with a product users already understand, and presenting security advice this way wherever we can might at least be marginally more useful.
To begin with, we could establish a high-level comparison between the new product and similar products that the user might already be familiar with.
For example, we can say that Amazon Key, a recent Internet-connected lifestyle product, is essentially a third-party smart lock combined with an Internet-connected camera, along with some mechanism for access/authentication management. It would therefore carry all the security risks of a smart lock and all the security risks of an Internet-connected camera, including the risks associated with improper access/authentication management and third-party assets, along with the additional risks associated with physical access. Enough has been written about the security implications of smart locks, Internet-connected cameras, third-party products/applications, and improper access/identity management, so describing the product this way gives the user a more meaningful baseline for visualizing the potential security implications in context. (There is still the risk that users don't know the products we compared the new product to, or that the security advice on those products is itself suboptimal, but we need to start somewhere: this approach is still better than users looking up generic IoT-related advice, or being presented with a set of vulnerabilities of some new product which they may or may not internalize. And if we approach every new product this way, we might eventually have some meaningful security advice on most IoT products.)
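If it helps to see that composition spelled out, here is a minimal sketch in Python with coarse-grained risk labels I made up for illustration (nothing below comes from an actual assessment of Amazon Key). The new product inherits the union of its analogous products' risks, plus whatever the combination itself introduces:

```python
# Hypothetical, coarse-grained risk labels for products the user already knows.
smart_lock_risks = {
    "lock firmware vulnerabilities",
    "remote unlocking if the account is compromised",
}
camera_risks = {
    "live feed exposure",
    "stored footage exposure",
}

# Risks that appear only because the pieces are combined into one product.
combination_risks = {
    "third-party access/authentication management",
    "physical access by delivery personnel",
}

# Baseline for the new product: everything its analogous products carry,
# plus whatever the combination introduces.
amazon_key_baseline = smart_lock_risks | camera_risks | combination_risks

for risk in sorted(amazon_key_baseline):
    print("-", risk)
```

The exact labels matter far less than the shape of the reasoning: familiar risk sets, combined, plus a remainder that is genuinely new.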
After establishing a high-level baseline comparison like this, it then becomes easier to clarify how the new product is better or worse than the products we compared it to.
For example, in the context of deliveries using Amazon Key, we drew parallels to smart locks and Internet-connected cameras, so we would want to point out the relative similarities/differences and the associated risks/rewards. One major difference between Amazon Key and a plain smart lock is that Amazon Key pairs the lock with auditing: Amazon prompts the Internet-connected camera (the Cloud Cam) to start recording, so there is a record of who enters the house. By introducing auditing, Amazon has made it at least slightly harder for a delivery person to enter the house and do more than deliver packages. On the flip side, the Cloud Cam also stores these videos, and there is discussion of it eventually “learning” which things/people belong to your home (your dog, etc.) to differentiate between what/who belongs in your house and what/who doesn’t. This poses an additional risk of someone accessing the stored videos, either through poor access/authentication management or through a data breach.
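That relative comparison can be sketched the same way, as a diff between risk sets. Again, the labels below are hypothetical stand-ins rather than an actual risk assessment of Amazon Key:

```python
# Hypothetical risk sets for a plain smart lock vs. Amazon Key
# (smart lock + camera + auditing). Labels are illustrative only.
plain_smart_lock = {
    "remote unlocking if the account is compromised",
    "undetected entry by someone who gains access",
}
amazon_key = {
    "remote unlocking if the account is compromised",
    "stored delivery footage exposed via breach or poor access management",
}

added_risks = amazon_key - plain_smart_lock      # what the camera/storage introduces
mitigated_risks = plain_smart_lock - amazon_key  # what the auditing makes harder

print("Added:", *sorted(added_risks), sep="\n  ")
print("Mitigated:", *sorted(mitigated_risks), sep="\n  ")
```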
At this point the user has enough information, and more importantly the right information, to make some deductions about risk.
When you think about it, all we really did here is indirectly point out some attack vectors and attack paths. But described this way, the user might functionally understand the security implications better, without necessarily knowing the technical details behind why or how the security compromises occur. In essence, we are comparing similarly good or bad things that might happen in a context; in other words, we are “associating” things/situations that are contextually similar.
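For the more formally inclined, the comparison implicitly sketches an attack graph. Here is a toy version in Python, with made-up nodes and edges rather than an exhaustive model of Amazon Key:

```python
# Toy attack graph for the Amazon Key example: each edge points from a
# foothold to what it enables. Nodes and edges are illustrative, not exhaustive.
attack_graph = {
    "phished Amazon account": ["remote unlock", "stored footage access"],
    "remote unlock": ["physical entry"],
    "poor access management": ["stored footage access"],
}

def paths_from(node, graph, path=()):
    """Enumerate attack paths starting from a given foothold."""
    path = path + (node,)
    next_steps = graph.get(node, [])
    if not next_steps:
        yield path
    for nxt in next_steps:
        yield from paths_from(nxt, graph, path)

for p in paths_from("phished Amazon account", attack_graph):
    print(" -> ".join(p))
```

The product comparison hands the user the interesting paths through a graph like this without ever requiring them to draw it.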
We could say that, by presenting information this way, we are tricking (though tricking is not exactly the right word here) the user into thinking like someone with more risk-assessment experience, without the user actually needing that experience or technical knowledge. To put it in formal terms, we are “priming” the user to make potentially better risk assessments. The idea of priming has previously been explored in the context of software developers’ blind spots manifesting as vulnerabilities in code, and it seems a useful enough concept to adapt to this context.
There are certainly other (and potentially more effective) ways to prime users besides offering comparisons to known products/situations, but all I wanted to convey is that this could be one way to help users assess risk better in the absence of an effective user-centric Personal Threat Model. There are also other suboptimal solutions (in addition to priming) that could be examined in this context… but that is a discussion for a later time.
Priming is imperfect, and is not a replacement for user-centric Personal Threat Models:
Irrespective of how we do it, this approach will not be a self-sufficient solution for user-centric risk assessment in the Internet-connected lifestyle product ecosystem. Without an informed party pointing users in the right direction, users can still make incorrect comparisons/determinations that lead to incorrect deductions about risk, and the issue still boils down to what I discussed in my op-ed: most users might not have the knowledge or expertise to properly determine their risk and understand the extent of the security implications, even when the vulnerabilities are presented to them.
In essence, a user-centric approach to Personal Threat Models is the larger problem we need to solve, but that doesn’t obviate the usefulness of tangential, shorter-term, suboptimal solutions such as the one we just discussed.