Diminutive XSS Worm Replication Contest
A friend pointed this out to me. Evidently the Sla.ckers.org website is hosting a "Diminutive XSS Worm Replication Contest". Their mission: to see who can write a new XSS worm (like the MySpace one, the recent Orkut one, etc.).
The goal of the contest is to have a functional web worm in as small a package as possible. From the website:
Okay folks, new small challenge - no prize, just an exercise in programming skill and because I want to see the results. After reading over the XSS worm thread I got to thinking. We haven't, to my knowledge, ever had a diminutive worm writing contest. We've done it for JS injection and for pulling in remote JS but not for worms. You can submit your code to this thread directly (I'd prefer it actually so that others can benefit from what you've done). If that's for some reason not acceptable send me your code directly and we can figure something out. Either way the winner's code must be posted in this thread. Actual cutoff to submit is Thursday the 10th of January at 7PM GMT.
Source: Diminutive XSS Worm Replication Contest, from the sla.ckers.org forums.
Grey Goo hits Second Life
This isn't the first time a worm (self-replicating code) has hit a large online game, and it won't be the last. Via various news outlets (like this BBC story, Slashdot, and the official Second Life blog):
[PST 2:44PM] An attack of self-replicators is causing heavy load on the database, which is in turn slowing down in-world activity. We have isolated the grey goo and are currently cleaning up the grid. We’ll keep you updated as status changes.
This apparently took them offline for a few hours. (I don't use Second Life or any of these online communities, so all of my information is second-hand.)
Source: Grey Goo within Second Life, from richardparent.net.
I have to admit, I like the idea of being able to watch the worm infect a world, sort of like a visible germ cloud or something. Way more interesting than looking at traffic stats when things go awry.
And you thought you were safe after SLAMMER, not so, Swarms not Zombies present the greatest risk to our national internet infrastructure
I had a great time at WORM06 in Fairfax last week, and in my scramble to get work done in preparation for a day out of the office, Wormblog updates slipped.
This paper comes from a conference on swarm intelligence and security. This is another one of those "worst worm" design papers, but it uses a novel approach: swarm intelligence.
The problem of attacks where sophisticated communities, such as BLACKHAT users, compromise larger and larger numbers of unsuspecting (and unsuspected) home personal computers in an effort to launch major attacks on both Government and corporate networks will be addressed in this manuscript. We call these attacks "Swarm Attacks", like a "swarm of bees". The Slammer worm, currently the fastest computer worm in recorded history, is an early precursor to this class of threat. Most countermeasure strategies proposed to deal with such attacks are based primarily on rate detection and limiting algorithms, or on the detection of a sudden increase in "Destination Unreachable" messages in a network. However, we speculate that such strategies will prove ineffective in the future.
In this manuscript we will introduce the basic principles behind the idea of such "Swarm Worms", the nature of the intelligent behavior that emerges, as well as the basic structure required in order to be considered a "swarm worm", based on our definition. In addition, we will present preliminary results on the propagation speeds of one such swarm worm, called the ZachiK worm. We will show that ZachiK is capable of propagating at a rate 2 orders of magnitude faster than similar worms without swarm capabilities while remaining stealthy.
Source: And you thought you were safe after SLAMMER, not so, Swarms not Zombies present the greatest risk to our national internet infrastructure, Fernando C. Colon Osorio and Zachi Kloppman.
Aim For Bot Coordination
A paper from this year's Virus Bulletin conference that explores IM-based botnet communication channels. While not too long (only 3 pages), it highlights some attractive features of OSCAR, the AIM protocol, that could be useful for bots.
In the last few years, there has been increasing interest within the virus-writing community in Internet Relay Chat (IRC) based malware, due to the power afforded by the IRC scripting language and the ease of coordinating infected machines from a chat-room type of structure. What has developed is a very modular, open-source sort of threat which is very rapidly adapted to include new functionality and new infection vectors. More recently, there has also been an increase in the number of threats spreading through Instant Messaging (IM) clients, particularly OSCAR-based clients like AOL Instant Messenger (AIM). IRC bots have begun using this functionality to spread, but there is more capability available within OSCAR than is currently being exploited.
As there has also been an increase in the number of bots using Command and Control (C&C) channels that utilize something other than IRC (primarily web-based currently), it stands to reason that there may be a possibility of virus-writers using OSCAR as a means of control. This paper looks to explore the capabilities of OSCAR for being used in C&C scenarios, and what steps could be taken to mitigate this proactively.
Source: Aim For Bot Coordination, Lysa Myers, from Virus Bulletin 2006.
SIS Analysis Toolkit
A departure from the normal, boring academic stuff, and actually on to something I've never featured here before (I think): mobile phone malware. The SIS Analysis Toolkit, according to the website, "consists of a base Perl module, SisDump, and a number of perl scripts and utilities useful for analyzing malware." I have to admit I've never looked at mobile phone malware. Surprisingly, it seems to be a growth niche in the past couple of years: from the early days of things like Caribe to more recent SIS malware like Mabir, mobile phone malware has been evolving. Most of it seems to target the Symbian60 platform, which is popular with Nokia phones and is a rich mobile computing environment.
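I'm not going to reproduce the toolkit's Perl here, but just to give a flavor of where SIS analysis starts, here's a rough Python sketch of my own (not part of the toolkit) that dumps the header fields of a SIS package. It assumes the legacy, pre-Symbian 9 layout of three 32-bit UIDs followed by a 32-bit UID checksum, all little-endian, so treat it as illustrative only.

    import struct
    import sys

    def dump_sis_header(path):
        # Assumed legacy SIS layout: UID1 (application), UID2, UID3,
        # then a 32-bit checksum over the UIDs, all little-endian.
        with open(path, "rb") as f:
            header = f.read(16)
        if len(header) < 16:
            raise ValueError("file too short to be a SIS package")
        uid1, uid2, uid3, checksum = struct.unpack("<4I", header)
        print(f"UID1 (application): 0x{uid1:08X}")
        print(f"UID2:               0x{uid2:08X}")
        print(f"UID3:               0x{uid3:08X}")
        print(f"UID checksum:       0x{checksum:08X}")

    if __name__ == "__main__":
        dump_sis_header(sys.argv[1])

The UIDs are the quickest triage signal: they tell you what kind of package you're holding and which application it claims to be before you unpack anything.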
I haven't played with these tools (I don't own a Symbian60 phone), but if you're curious about exploring your phone or any of the malware that may be on it, this looks like the right place to start.
Worms of the future: Trying to exorcise the worst
Another "worst case" scenario, but this one has seen a few of the preditions (ie messing with debuggers) come true in the botnet world.
According to [Wikipedia], a worm could be defined as: a self-replicating computer program that does not need to be part of another program to propagate itself. This document is an attempt at predicting the worst possible future of worms, given the current computer science possibilities.
Up to now, we've seen many different kinds of worms, each new generation improving on the previous one. The fact is that all such threats, for now, have suffered from a few vulnerabilities that prevented them (much to our relief) from functioning to their full potential. Some have achieved their result to a greater extent than others, but none of them seem to have realised the greatest fear: wreaking havoc on the Internet and on Information Systems on a global scale (although some have come close).
This document tries to look at these present vulnerabilities from a security point of view (that is, by considering the Confidentiality, Integrity and Availability of worms) and in the next chapter, how to maintain these security requirements throughout the life-span of the worm, that is to say, as long as possible.
Following this, the document then attempts to provide hints on solutions that could be used in defense against new threats.
As it has been pointed out to me, other similar papers exist, one of them being [Warhol]. Surely a nice complementary reading to this paper.
Source: Worms of the future: Trying to exorcise the worst, by Nicolas Stampf.
Intelligent Worms: Searching for Preys
Another paper showing why, at least in theory, a worm that has some roadmap to its victims should be more efficient than one that blindly searches for them.
Internet worms have been a persistent security threat in recent years since the Morris worm arose in 1988. After the Code Red and Nimda worms were released into the Internet in 2001, the Slammer worm was unleashed with a 376-byte User Datagram Protocol (UDP) packet and infected at least 160,000 computers worldwide on January 25, 2003. Later, the Blaster and Witty worms flooded the Internet in 2003 and 2004, respectively. These active worms caused large parts of the Internet to be temporarily inaccessible, costing both public and private sectors millions of dollars. The frequency and virulence of active-worm outbreaks have been increasing dramatically in the last few years, presenting a significant threat to today's Internet. In this article, we review the prey-searching methods that worms use currently, and may potentially exploit in the future. While reviewing what has been used by worms is doable, predicting what worms may use seems prohibitive: there would be a million ways for active worms to attack the Internet. We show how mathematics has been playing an important role in providing both guidance and methodology in studying current and futuristic worm attacks. In particular, we outline how mathematical tools (e.g., epidemic models, statistics, machine learning, and game theory) can be applied in this area.
Source: Intelligent Worms: Searching for Preys, by Zesheng Chen and Chuanyi Ji.
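Since the abstract calls out the epidemic model, here's a rough back-of-the-envelope Python simulation, mine and not the authors', of a plain random-scanning worm under the classic SI model. The scan rate and vulnerable population are illustrative numbers loosely echoing the Slammer figures quoted above.

    def simulate(vulnerable=160_000, scan_rate=100.0, i0=1.0, dt=1.0, hours=12):
        # Each infected host fires scan_rate probes/sec uniformly into the
        # 2^32 IPv4 space, so dI/dt = scan_rate * I * (N - I) / 2**32.
        # Euler-integrated below with step size dt.
        address_space = 2 ** 32
        infected = i0
        steps = int(hours * 3600 / dt)
        for step in range(steps + 1):
            if step % 3600 == 0:
                print(f"t = {step * dt / 3600:4.1f} h   infected = {infected:9.0f}")
            new = scan_rate * infected * (vulnerable - infected) / address_space * dt
            infected = min(infected + new, vulnerable)

    simulate()

Run it and you see the familiar S-curve: slow ramp, explosive middle, saturation. The paper's point is that a worm with foreknowledge of its targets skips most of that slow ramp entirely.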
Google Search API Worms
Worms that search Google to find new victims aren't new. Look at Santy from late 2004, which found vulnerable phpBB sites via Google queries. While web application worms and the idea of a worm that spreads with some foreknowledge of its targets are nothing new, the author here suggests that it may be simpler than previously thought. I'm still not convinced.
One of the main disadvantages of all AJAX applications is the lack of cross-domain request capabilities. In simple words, a web object from one site cannot access another one from a different site. The reason for this security feature is hidden deeply inside every modern browser security sandbox, which is responsible for keeping your personal information private and safe.
Web worms can use Google’s infrastructure to propagate. If a malicious mind finds a vulnerability in WordPress, for example, and this vulnerability allows SQL Injection, a worm may be written to crawl blogs in search of this vulnerability and embed itself into everything that is vulnerable. Once a user visits an infected blog, the worm starts another cycle.
Source: Google Search API Worms on the GNUCITIZEN website.
Worms, Bots and Holy Grails
This post may be long overdue, but I hope it still has value. It comes out of thinking a lot about the relationship between bots and worms, and about the role of Wormblog going forward.
The Worm Problem
The "worm problem" has a couple of facets to it. First and foremost, it's likely that the worm will attempt to propagate so rapidly and so aggressively that it will disrupt normal network operations. Secondly, it can introduce new points of unauthorized access or control in systems that are under your administrative domain. In the wake of Code Red and Nimda, the world woke up and saw the threat that network-aware malware can pose to the Internet's use. Since then, we've seen a dramatic upsurge in interest in worm "solutions" and research. These facets I list above seem to be the main concerns of anyone with an interest in worm research and hoping to apply it to the Internet at large. These are also the concerns that have lead to various commercial worm solutions.
Now that we have a clear problem statement, we can state the goals of worm protection clearly and succinctly. Any solution to "the worm problem" must be able to detect a new piece of autonomously propagating network malware (ie a worm) as fast as possible and be able to selectively identify the traffic caused by it. Optionally, for a real solution, it must enable the operator to selectively kill the malicious agent's traffic. In short, it must identify the worm and stop its propagation before it gets too far. Because network-centric views give you the biggest area of monitoring, and because we often have to think of worms as "zero day" threats (ie using an exploit specifically coded for the worm's use, even if it uses a well known vulnerability), I tend to focus Wormblog on naive, network-centric detection of novel worms.
I've spent a lot of time looking at how to detect worms reliably based on network traffic, and I've even helped implement such a solution in a commercial product (which I'll name in a few minutes). I've gone looking for other such products or techniques that are available, and what I've found is only a few classes of tools to help you detect a worm on the net.
The first, and most common, kind of tool is a host-based agent that relies on signatures, such as a traditional antivirus product. This is probably still the most common method of malware defense, good old desktop AV. In an enterprise setting, this can be a managed solution, so that when new signatures get published they can get pushed out as rapidly as possible to all of the desktop agents. Still, this is a reactive solution.
The second solution, and probably nearly as popular (in terms of coverage) as the first, is network IDS-based detection of malware, usually based on either the exploit payload of a worm or the worm executable's payload. IDS engines like Snort have rulesets for worms, like these Nimda rules, that you can load. But again, they're reactive. Someone has to see the worm to develop a signature.
The third type of detection is commonly referred to as Dark IP or Blackhole monitoring, DarkNet techniques, or a Network Telescope. The names differ, but they're all basically the same thing: take some unused address space and route whatever hits it to a collector. Run some statistics and analytics on the data collected and you may be able to detect a worm. You can even use this data for signature generation. Bear in mind that a lot of the Internet is monitored in this way; there isn't a stretch of the net that someone isn't watching.
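To make the dark IP idea concrete, here's a toy Python sketch of the kind of analytics involved. The record format and the surge threshold are made up for illustration; real telescopes do much more.

    from collections import defaultdict

    WINDOW = 60  # seconds per bucket

    def scanners_per_window(records):
        # records: iterable of (timestamp_seconds, source_ip, dest_port).
        # Nothing legitimate should touch a dark prefix, so every distinct
        # source per window is, by construction, a scanner.
        buckets = defaultdict(set)
        for ts, src, _dport in records:
            buckets[int(ts) // WINDOW].add(src)
        return [(w, len(srcs)) for w, srcs in sorted(buckets.items())]

    def alert_on_surge(counts, factor=5):
        # A sudden multiplication of distinct scanners is the classic
        # early signature of a new worm hitting the telescope.
        prev = None
        for window, n in counts:
            if prev and n >= factor * prev:
                print(f"window {window}: {n} scanners (was {prev}) - possible outbreak")
            prev = n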
The type of detection I most favor is based on anomaly detection: for example, a set of heuristics and rules that look for a growing pattern of alerts that can be attributed to a worm's propagation attempts. Products that apply this sort of analysis include Arbor Networks' Peakflow X, Mazu's Profiler, and Therminator (based on the Lancope engine).
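Here's a crude Python illustration of that style of heuristic: an exponentially weighted baseline over per-minute alert counts with a deviation threshold. The commercial products just mentioned use far richer models; this only shows the shape of the idea, and the parameters are arbitrary.

    def flag_anomalies(alert_counts, alpha=0.1, threshold=4.0):
        # Track an exponentially weighted mean/variance of per-minute
        # alert counts and flag minutes far above the learned baseline.
        mean = float(alert_counts[0])
        var = 0.0
        for minute, count in enumerate(alert_counts[1:], start=1):
            std = max(var ** 0.5, 1.0)  # floor so a flat baseline still works
            if count > mean + threshold * std:
                print(f"minute {minute}: {count} alerts vs baseline {mean:.1f}")
            diff = count - mean            # update *after* testing, so the
            mean += alpha * diff           # spike doesn't absorb itself
            var = (1 - alpha) * (var + alpha * diff * diff)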
Finally, there's a honeypot-based approach favored by Forescout and their Wormscout product. I haven't seen a lot of this in the field, to be honest, and in most cases a honeypot isn't the most sensible approach to detecting worms quickly. You're relying on the honeypot to selectively see attacks that you may not be able to anticipate (if the exploit or vulnerability isn't known), and to be able to distinguish them from generic background attacks (more on that later). Even then, you still have to honeypot a lot of addresses, either with real or virtual honeypots.
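At its core the honeypot idea reduces to something like this Python sketch: listen where nothing legitimate should connect, and log whoever shows up. A real deployment spreads real or virtual honeypots across many addresses and captures payloads; the port here is an arbitrary stand-in.

    import socket
    from datetime import datetime

    def listen(port=4444):
        # Any connection here is unsolicited by construction. Low ports
        # (e.g. 135/445) would need root, so an unprivileged port is used.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen(5)
        while True:
            conn, (addr, _sport) = srv.accept()
            print(f"{datetime.now().isoformat()} probe from {addr}")
            conn.close()

    listen()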
While I tend to focus on a network-centric view of things, I do recognize the value of a combined approach. You can't observe everything from the network, and so host-based approaches will always be needed. You need to capture a sample to analyze, and so a honeypot will be vital. However, I still think that because worms have such a significant network footprint, detecting them using network traffic and the patterns in that traffic makes a lot of sense.
The Bot Problem
Now, the bot problem is somewhat different. Because bots usually don't propagate nearly as aggressively as worms have in the past, their threat is not that they'll knock out your network with too much traffic. The big problem is that somewhere, someone else has access to your network and your assets, and that someone is unauthorized. The primary, tractable solution to this problem is quite clear, then: detect bots and block them as fast as possible. Stopping the bots from getting a foothold is a "nice to have", but because they allow for an external party to control the infected machine, most people want to block that communication and then worry about more bots coming online from within their networks.
However, bots will usually rally at some central point for instructions, and that's often how people detect their presence, through monitoring the communications channels. Some people do use IDS signatures for the well known exploits that most bots use, but that's not as unique as looking for a bot's specific actions. There are some Snort signatures to detect bots and their backdoors, but this effort isn't yielding a significant bumper crop of new signatures.
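As a rough illustration of what rally-point monitoring looks for, here's a toy Python filter over captured flow payloads. The flow tuple format is an assumption about whatever capture tool feeds it, and real detectors are much smarter about ports and protocol state.

    import re

    # Telltale IRC verbs at the start of a payload line; bots often run
    # IRC on non-standard ports, so matching content beats matching 6667.
    IRC_VERBS = re.compile(rb"^(NICK|USER|JOIN|PRIVMSG)\b", re.MULTILINE)

    def flag_irc_like(flows):
        # flows: iterable of (src_ip, dst_ip, dst_port, payload_bytes).
        for src, dst, dport, payload in flows:
            if IRC_VERBS.search(payload):
                print(f"{src} -> {dst}:{dport} speaks IRC - possible bot rally point")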
Ultimately, that's still reactive. People have to collect and analyze the bots to identify where they rally (even if it's sped up with sandboxing), and it's still signature-based. The first bot that comes around using a new exploit will be missed by network signatures. Desktop AV is still the most popular way to detect bots.
The Importance of the Differences
There are a number of key differences between worms and bots, and their associated problems, that have a significant impact on the future of malware detection techniques. These may impact funding choices for future research, and will probably be visible as changes in the literature.
I don't think that anyone disagrees that a proactive solution to the problem is preferred. After all, a shorter gap between a malware outbreak and its detection by customers is still a reactive solution. To that end, I think that I'll keep focusing on naive approaches to malware detection.
Most bots out there are derivatives of common malware families, and why detection isn't based on those common characteristics is beyond me. Why I can pick out a Spybot versus an Agobot from within IDA in just a few moments, yet most AV engines can't detect a freshly repackaged Spybot, is still a mystery to me. Given the pace of bot authorship, which won't be slowing down, this sort of lag in detection seems unacceptable to me.
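To illustrate why family-based detection seems so doable to me, here's a toy Python classifier over an unpacked sample's bytes. The marker strings are illustrative stand-ins, not real family signatures; the point is that family traits survive repacking once you get past the packer.

    # Illustrative family markers (made up); many variants keep their
    # recognizable command strings even after repacking.
    FAMILY_MARKERS = {
        "agobot-like": [b".bot.secure", b".scan.start"],
        "spybot-like": [b"keylogger", b"loginfo"],
    }

    def classify(unpacked_bytes):
        # Score each family by how many of its markers appear in the
        # unpacked sample, and name the best-scoring family if any hit.
        scores = {
            family: sum(marker in unpacked_bytes for marker in markers)
            for family, markers in FAMILY_MARKERS.items()
        }
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "unknown"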
However, I think that the biggest challenge to detecting malware on the network (and using only network traces) comes from the dramatic increase in the amount of background "radiation" on the Internet today compared to 2000 or so. The volume of scans due to infected boxes means that scan alarms on the open Internet are always going off. Most worm detection techniques rely on a clean, "happy" Internet, one where the noise generated by a worm would be unmistakable. Yet, this isn't the case, and rarely do we see people testing their worm detection tools (ie tools that look for TCP RST storms as an indicator of a worm) on the modern, polluted Internet. This is an important aspect of research that needs more attention, especially as many grant-funded worm-detection tools are being completed around this time.
This question, "can your worm detection tool work in the face of a thousand live botnets?" may be moot if the worm problem has really been diminished or gone away. However, naive detection of network threats is still just as important as ever. The good news is that humans will still be needed for the task, and probably more than ever before. And, if you're attracted to grand challenges in computer security, making sense of this mess is a great problem to tackle.
MS06-040 and the Death of the Worm
A couple of years ago, when a vulnerability like the recently disclosed Microsoft Security Bulletin MS06-040: Vulnerability in Server Service Could Allow Remote Code Execution was released, you figured a worm was not far behind. And not just a basic worm, but the kind that can infect hundreds of thousands of machines quickly. After all, we've been expecting that to happen given what we saw in the past with MS05-039 (Zotob, which really was a bot), MS04-011 (Sasser) and MS03-039 (Blaster).
But this is 2006, and people recognize that if you were able to get your code onto hundreds of thousands of systems, you should be able to do something with them. And so we have bots like W32.Wargbot taking advantage of that vulnerability. It didn't spread nearly as aggressively as Blaster did, but it showed that we're beyond simple worms, for whatever reason.
During my hiatus, I spent some time wondering if Wormblog was even still needed. It's only been a few years, but worm detection no longer seems as high-pressure a problem as it was in the past. For one, you have a significant amount of background noise from bots scanning for victims. Also, you have a dramatic slowdown in malcode propagation compared to a couple of years ago. Don't be surprised if you see more botnet material here because of such changes. I think there's still interesting research going on in worms, not just bots, and I'll keep digging for it.