Keywords: security, encryption, tampering, secure, microcontroller
A similar version of this article appeared October 5, 2014 on Embedded.
In 1964 International Business Machines (IBM®) announced its System/360. It was by no means the first computer, but it was one of the most popular, with thousands delivered between 1965 and 1978. It was considered state of the art when introduced, with medium-range systems sporting 128KB to 512KB of memory and a throughput of about 0.3 MIPS.
The system was large—multiple cabinets contained auxiliary storage, communications equipment, and peripheral components—and required specialized power and cooling. These machines were tended by a highly trained cadre of computer operators, and unless you had a good reason to be there, you did not easily gain access to the “machine room.” These rooms had the very best security: multiple physical locks and humorless men guarding the door.
These machines managed millions of financial transactions, making them a ripe target for criminals wishing to tap that flow of money. But tampering with these machines was virtually impossible. All transactions were secure because the machines themselves were physically secure within glass-walled machine rooms.
Fast-forward fifty years. The smallest bit of personal technology now has computing horsepower and communication capabilities that dwarf the power in that historic IBM machine room. And today this computing power is not just in the most obvious candidates, cell phones and personal computers. Contemporary televisions, point-of-sale (POS) terminals, utility meters and thermostats, portable medical devices, and smart kilowatt-hour meters all contain computers and communication facilities. Many of these products directly or indirectly touch a flow of money; all manipulate private, personal data. And did you notice? There is not a security lock or an armed guard in sight.
And that makes them all amazingly attractive targets.
More and more frequently, computer-based systems store valuable data—think about the medical records stored on systems belonging to your health care provider, or your credit card numbers in transit from a retailer to your bank, or the value remaining on a gift card. The systems also manage the flow of valuable commodities—think now about your smart electricity meter or the cable box that manages access to programming. In each of these cases, if an opponent can gain control of the computer that touches this valuable data, they can access your private information, steal your money, or fraudulently access goods and services.
But even if the smart device stores no valuable information and does not control the flow of valuable resources, there is still a concern: the value of the internal intellectual property (IP) that runs the device. If an attacker can reverse engineer the internal programs, they can produce a competing product without expending the money for development. It is important to protect your IP base to avoid giving your competitors an unfair edge.
While remote cyber attacks seem to receive much of the press coverage, many of the most effective attacks use a much simpler method: tampering. Recall the examples above. Someone might be tempted to use an unauthorized authentication card with the cable box to receive free movies. Another might tamper with the electricity meter so that it underreports energy usage and lowers a monthly bill. Yet another might tamper with a credit card reader so that it reveals private card numbers and data. All of these attacks involve someone with physical access to the device using relatively low-tech means to circumvent the security measures embedded in the device.
In this article we look at tampering, not just what we can do about it, but when it makes sense to do nothing.
At a minimum, devices that purport to be secure should be tamper resistant. That is, the designer and manufacturer of the device should take at least minimal steps to deter the curious and the casual hacker. These steps include virtual barricades in the hardware, including using nonstandard fasteners, plastic or metal welds in construction, or glue in assembly. This means, of course, that servicing the device can also become more complicated, but remember that the focus here is on security.
But ultimately it does not matter how difficult you make it to pry open a product; a determined opponent will find a way in. When that happens, there are four possible responses to a tamper attack, all directly related to the value of a secure device and its protected data.
To “repair” the damage, one must replace the device, but presumably at a relatively modest cost compared to the recovery cost if sensitive material had been lost.
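One common active response, for devices whose secrets are worth more than the hardware itself, is to erase key material the instant a tamper event is detected, so a successful break-in yields nothing but a dead box. The sketch below illustrates that idea for a hypothetical microcontroller with a case-open switch wired to an interrupt; the storage locations and the names secret_key, tamper_latched, and tamper_detect_isr are assumptions for illustration, not any particular vendor's API.

```c
#include <stdint.h>

/* Minimal sketch, assuming key material held in always-on (battery-backed)
 * RAM and a case-open switch or security mesh routed to an interrupt.
 * All names here are hypothetical placeholders. */

#define KEY_BYTES 32u

static volatile uint8_t secret_key[KEY_BYTES]; /* assumed battery-backed RAM */
static volatile uint8_t tamper_latched;        /* checked again at every boot */

/* Overwrite the key so a later memory dump recovers nothing. The volatile
 * qualifier keeps the compiler from optimizing the writes away. */
static void erase_key_storage(void)
{
    for (uint32_t i = 0u; i < KEY_BYTES; i++) {
        secret_key[i] = 0u;
    }
}

/* Assumed to fire when the enclosure is opened or the mesh is broken. */
void tamper_detect_isr(void)
{
    erase_key_storage();
    tamper_latched = 1u;
    /* Application code refuses to run while tamper_latched is set; the
     * owner replaces the device, but no sensitive data is ever exposed. */
}
```

The ordering is the point of the sketch: the secret disappears first, and only then does the device report the event or wait for service.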
So far, we have discussed physical tamper events with a secure device, literally opening it to extract useful information or modifying it to cause a malfunction. But there are more devious ways to breach security. One can tamper with a device without even touching it. How?
Most electronic devices are sensitive to environmental factors, including temperature, humidity, power conditions, or electrical or magnetic fields. Expose an electronic device to environmental conditions beyond its rated limits, and it will likely misbehave—and possibly in ways that benefit the attacker.
How does this work? Consider three instances of what can be called “indirect” tampering. A common pattern runs through them all: design mistakes in the original equipment.
In the first case of temperature fluctuation, the design error was in the system response to a component failure. When the temperature moved beyond prescribed limits and the device sensed a failure, the designer had the device drop to a debug prompt—which is exactly what the attacker wanted! In the second case, the utility meter, the designer failed to anticipate how large, powerful magnets might disrupt meter operation; in the designer’s experience, transformers “just work.” The third case, power cycling, is interesting because most engineers tend to internalize the state of the device they are designing or debugging. They know what settling time is required from the moment the power switch is turned off until the device is ready to be turned on again. Knowing this, they treat their systems much more “nicely” than the average consumer might… much less the way a malicious hacker would.
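One way to harden against the first and third cases is a fail-secure check at power-up and during operation: an out-of-range reading or a suspiciously quick power cycle is treated as a possible attack, never as an invitation to drop into a debug console. The fragment below is a rough sketch of that policy; the limits, the board-support helpers, and enter_lockdown() are invented names standing in for whatever a real design would provide.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative limits only; real values come from the product's ratings. */
#define TEMP_MIN_C   (-20)
#define TEMP_MAX_C   ( 85)
#define VDD_MIN_MV   (3000)
#define VDD_MAX_MV   (3600)
#define SETTLE_MS    (500u)   /* minimum quiet time after a power loss */

/* Assumed board-support helpers; names are placeholders. */
extern int32_t  read_temperature_c(void);
extern uint32_t read_supply_mv(void);
extern uint32_t ms_since_power_loss(void);   /* e.g., from a backup counter */
extern void     enter_lockdown(void);        /* erase secrets, halt, log */

/* Fail secure: anything outside normal operating conditions is treated
 * as a potential attack, never as a reason to open a debug prompt. */
void environmental_check(void)
{
    int32_t  temp_c = read_temperature_c();
    uint32_t vdd_mv = read_supply_mv();

    bool out_of_range = (temp_c < TEMP_MIN_C) || (temp_c > TEMP_MAX_C) ||
                        (vdd_mv < VDD_MIN_MV) || (vdd_mv > VDD_MAX_MV);

    /* Rapid power cycling is suspicious too: insist on the settling time
     * the designer knows the hardware needs, rather than assuming the
     * user (or attacker) will be polite about the power switch. */
    bool too_soon = ms_since_power_loss() < SETTLE_MS;

    if (out_of_range || too_soon) {
        enter_lockdown();
    }
}
```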
Design engineers typically think about the best-case scenario for their end application. Their focus is on delivering a working product, and delivering it quickly. The main concern is: how can I make my product work in the shortest time, at the least cost, and with the most robust features? But the attacker is thinking differently. The attacker’s main concern is: how can I cause the product to malfunction so I can get what I want?
There is a lesson here: every team designing a secure product should have at least one person on board who can think like a criminal!
We must accept a reality in today’s electronic world: your opponent will have physical access to your device and will attempt to take advantage of any weaknesses they discover. There will be no one to stop them, so what steps can you take to reduce the threat and strengthen your embedded security?
Here are some practical suggestions.
The easy targets in any secure system are the physical points of interface between the system and the rest of the world. By opening the box, by tapping the communication links, or by exposing the device to environmental extremes, the device may be coaxed into giving up its secrets. Then the attacker can take advantage of the malfunction. The job of the designer is to anticipate these attacks, plan for them, and understand the consequences of a successful breach.
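In code, that anticipation often comes down to how conservatively the external interfaces are brought up. The sketch below shows one plausible shape for it: check the firmware image before trusting it, enable only the interfaces the product needs, and never compile a debug console into production builds. The helper names, the link-time symbols, and the use of a CRC (a real design would prefer a cryptographic signature) are all assumptions for illustration.

```c
#include <stddef.h>
#include <stdint.h>

/* Assumed board-support hooks and link-time symbols; names are placeholders. */
extern void uart_enable(void);
extern void debug_console_enable(void);
extern void enter_lockdown(void);

extern const uint8_t  app_image_start[];
extern const size_t   app_image_len;
extern const uint32_t app_expected_crc;
extern uint32_t crc32(const uint8_t *data, size_t len);

void bring_up_interfaces(void)
{
    /* Check the image before letting it talk to the outside world.
     * A CRC only catches corruption; a signed image is the stronger choice. */
    if (crc32(app_image_start, app_image_len) != app_expected_crc) {
        enter_lockdown();
        return;
    }

    /* Enable only the interfaces the product actually needs. */
    uart_enable();

#ifdef DEBUG_BUILD
    /* Debug access exists only in development images; production firmware
     * never offers a console an attacker could coax open. */
    debug_console_enable();
#endif
}
```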
So, we know what security really means. With security targets identified, we seal off the product from physical attack as best we can. We implement strategy and tactics in the product to thwart the most determined opponents. What is left? Nothing…unless, of course, your product needs to communicate with the outside world. That’s the realm of communications security, which we will discuss at length in our next article.