This is mostly me talking out of my ass. I never really formally studied any of these topics, just some stream of consciousness stuff that I thought of last time our workplace went down from Blaster.
I was at lunch recently with a friend who happens to work with me. We talked about the Von Neumann programming model. Any code that runs on a current processor ends up running on what is, in effect, a finite-state Von Neumann machine. Machines have gotten so complex now, with so many services exposed, that though it's in t
>>under 24/7 threat from some bored kid in Mozambique
Doubtful, considering it is the poorest country in the world last I checked. That kid would probably be more concerned with where his next meal was coming from than with being 'bored'.
Boredom is something mostly found in affluent societies.
>>Doubtful considering it is the poorest country in the world last I checked. That kid would probably be more concerned with where his next meal was coming from rather than being 'bored'.
I don't pretend to know Mozambique well enough to say there are zero people with net access, the inclination, the time, and the skillset to create malicious code. If not, then substitute some other location. Tuvalu may be better; it has some net infrastructure and some cash because of the sale of the .tv TLD.
In actuality it doesn't quite matter if it's Mozambique, or Tuvalu, or Russia, or Israel, or anywhere for that matter. It doesn't take a team of people. Slammer killed off the Internet in South Korea and disabled ATMs and some emergency services, and it was one damn 376-byte UDP packet. It just takes one individual with the time and will and connection to do it, and it may come from anywhere.
When I was taking my digital circuits class (basically creating stuff with discrete logic gates: NAND, XOR, stuff like that) we were introduced to the concept of Don't Care states. Basically, there were certain input states we assumed would never happen, so we could cluster the known states and be more efficient in the number of gates. The problem was, what happened if, for some reason, we did end up in one of those bad, "assumed it would never happen but here we are" states? Unclear; you kind of hope you eventually get pushed back to a valid state. You hope, but you might instead get some new automaton that just switches from one unexpected state to the next. We then had an assignment to push those illegitimate states back to a valid state in the state machine.
Buffer overflows and stack smashes (though not all errors) are these "assume we know the states going in, but look, here we are" states. But I don't think we can use the solution from the example above. The "push all bad states back to the wanted automaton" trick worked because there was a small number of states. You can't really do this with a general computer program: the number of states is enormous, so it's difficult to account for every possible bad state, and the code to push the bad states back to a valid one would be about as large as the original code, with all the same possible bugs.
But there are techniques that help with the state stuff. Electric Fence manipulates the MMU (it places an inaccessible guard page next to each allocation) so overwrites fault immediately. Though meant as a debugging tool, it may be useful as a security tool. ProPolice and StackGuard help stop the "new automaton". OpenBSD now has W^X protection on architectures that support it. Since you can't put shellcode on the stack anymore, you need to find usable code in libraries instead, and OpenBSD helps there too by randomizing the locations of loaded shared libraries, so you can't, say, find the code for system() all that easily.
But there's still stuff that looks weird for a system that has to be assumed under constant attack. Why is the return address for functions kept on the stack, where it's vulnerable to being overwritten? Systems aren't designed for security, and it goes all the way down to the chip level. Palladium, or whatever they're calling it these days, won't help. It only ensures that the code initially loaded by the OS is from a known source. It doesn't prevent the unwanted states. I'm sure the RPC code that caused Blaster would have been signed under Palladium.