Friday, April 2, 2010

Mozilla pegs worldwide Firefox share at 30%

Mozilla estimates that Firefox now handles almost 30 per cent of worldwide web access.

On Wednesday, the open source outfit released its first ever quarterly analyst report (pdf), a collection of web-happy stats dubbed The State of the Internet. Crunching data from four separate online research houses - StatCounter, Quantcast, Net Applications, and Gemius - Mozilla says that its influence is the strongest in Europe, where it spans 39.2 per cent of the browser market.

Next comes South America at 31.1 per cent and then Africa at 29.7 per cent, with North America bringing up the rear at 26 per cent. Mozilla does not provide official numbers on Antarctica, but StatCounter says that at the bottom of the earth, Firefox has an 80 per cent share. Which only makes sense. Open source keeps you warm.

According to Mozilla, Firefox usage is growing most rapidly in Russia, where uptake spiked 20 per cent this quarter. Mozilla guesses this has something to do with chairperson Mitchell Baker's visit to the country in February. Now if we could only get her to visit all those companies still running IE6.

Russia, incidentally, is one place where Google is not the browser's default search engine. All those clicks are going to the native Yandex.

Indonesia, India, the Philippines, Australia, Mexico, and Turkey also showed Firefoxian growth in excess of 15 per cent during the quarter. And according to the report, Asians are the most likely to beef up their browsers with add-ons - unless you consider that small sample size in Antarctica. Since January, Mozilla has seen 538 add-on downloads from the continent's 1,000 inhabitants.

A recent Mozilla Labs study indicates that the average Firefox user has two to three tabs open at a time. But one unnamed participant went so far as to open 600. Presumably, none of the 600 were running Flash.

Red Hat injects RHEL with new iron love

Red Hat has pushed out another rev of its Linux variant. With Enterprise Linux 5.5, support for the latest processors from Advanced Micro Devices, Intel, and IBM has been back-ported to the Linux 2.6.18 kernel at the heart of the RHEL 5 stack.

According to Tim Burke, vice president of platform engineering at Red Hat, the kernel in RHEL 5.5 has been improved and now includes features from more current Linux kernels, so it's not particularly fair to call it Linux 2.6.18. The point is that any application certified to run on Linux 2.6.18 or later, possibly many years ago, will work on RHEL 5.5 and still have support for new hardware: the Power7 processors from IBM that debuted back in February, the "Westmere-EP" Xeon 5600s from Intel that came out two weeks ago, the "Magny-Cours" Opteron 6100s from AMD that launched earlier this week, and the "Nehalem-EX" Xeon 7500s that were announced yesterday.

Because machines based on the Opteron 6100, Xeon 7500, and Power7 processors all use a form of non-uniform memory access (NUMA) to share memory across multiple processor sockets, and also have multiple cores and caches inside each socket, RHEL 5.5 includes a lot of work to make the operating system aware of the system topology, so that memory allocation and job scheduling place instruction streams and their data as close together as possible. The kernel also has tweaks that try to cram as much work onto as few cores as possible, allowing servers to conserve power as they dial down or quiesce idle cores.
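
The effect of that topology awareness can be approximated from user space. As a minimal sketch - assuming a Linux box and Python 3, with core IDs that are purely illustrative - a process can be pinned to the cores of a single socket so its instruction stream stays near its cached data, which is roughly what the RHEL 5.5 scheduler now tries to do automatically:

    # Minimal user-space sketch of the NUMA locality idea (Linux only).
    # The core IDs for "socket 0" are an assumption; real topology
    # lives under /sys/devices/system/node/.
    import os

    first_socket_cores = {0, 1, 2, 3}            # assumed cores of socket 0
    os.sched_setaffinity(0, first_socket_cores)  # pid 0 = the calling process
    print(os.sched_getaffinity(0))               # confirm the new CPU mask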

The updated RHEL also has a lot of I/O optimizations to take advantage of virtual I/O hardware features in the most current x64 and Power processors, which cuts down on I/O overhead in virtualized environments. In I/O-heavy virtualized workloads where the I/O was virtualized in software, rather than on the chip, the I/O overhead could be as high as 30 per cent, which is unacceptable.

Burke says that with these tweaks for I/O virtualization, which include a feature called Single Root I/O Virtualization (SR-IOV), a guest operating system running inside either a Xen or KVM hypervisor embedded in RHEL 5.5 can drive a 10 Gigabit Ethernet adapter card to its saturation point - and he claims this is the only hypervisor environment today that can do so. (That won't last long, with RHEL being open source.)

While the freestanding KVM hypervisor at the heart of the Red Hat Enterprise Virtualization, or RHEV, product was updated with a beta of its own 2.2 release using the RHEL 5.5 kernel earlier this week, RHEL 5.5 is available today and supports fatter guest virtual machines. The RHEV 2.2 beta can support 16 virtual CPUs and up to 256 GB of memory per guest, but RHEL 5.5 can support 32 physical processor cores and up to 512 GB of memory on either a Xen or KVM guest.

The bare-metal RHEL 5.5 kernel can support up to 1 TB of physical memory and can scale well beyond the current top end of 64 cores delivered today in eight-way Xeon 7500 systems. The open source community has already figured out how to do 512-core NUMA systems for the Itanium chips and is leveraging this work as x64 architectures get fatter. The RHEL 5 kernel has a stunning theoretical maximum of 32,000 threads that it can support, which is well beyond anything any server maker can put into the field in a single system image. Later this year, IBM's top-end Power 795 systems will have 32 sockets with a total of 256 cores and 1,024 threads.

The largest general-purpose Xeon 7500 machines will have maybe 64 sockets, which means 512 cores and 1,024 threads, and it looks like Itanium 9300 machines will probably top out at 64 sockets as well, but those quad-core chips have only eight threads each, so that's a maximum of 512 threads. AMD tops out at 40 threads in four-socket boxes with the Opteron 6100s. That will barely tickle the limits of RHEL 5.5.

By the way, RHEL 4 is not getting support for all this new iron, since Red Hat stopped doing major backporting to this early RHEL version six months ago. At some point, says Burke, the changes necessary to make new hardware work on the older versions without breaking application compatibility would involve far too much work or not be possible at all.

In general, a RHEL version gets three years of cutting-edge hardware support (roughly updated every six months), one year of transitional support where major hardware enablement and driver work is done, but perhaps not the greatest amount of tuning, and then three years where the version is in maintenance mode, with bug fixes and security patches. The expectation is that RHEL 5 will have a couple more years of hardware maintenance, but it really depends on how radical the hardware changes are in the future. If the changes are too radical, RHEL 5 gets sent to pasture sooner.

In addition to the updated hardware support, RHEL 5.5 has pulled in OpenOffice 3.1, which has better compatibility with Microsoft's Office 2007 formats, and the Samba print and file server has been updated to work with Windows 7. The SystemTap dynamic tracing tool that is part of the development stack in RHEL has also been enhanced so it can probe and poke C++ applications, rather than just C apps. The GDB debugger also has better support for C++ applications in that it allows developers to debug one thread at a time instead of having to suspend all threads in C++ code at the same time.

Trojan poses as Adobe update utility

Miscreants have begun creating malware that overwrites software update applications from Adobe and others.

Email malware that poses as security updates from trusted companies is a frequently used hacker ruse. Malware posing as the update utilities themselves, rather than as individual updates, represents a new take on the tactic.

Vietnam-based anti-virus firm Bkis said the tactic is a logical follow-on from earlier approaches where viruses replace system files and startup programs.

Nguyen Minh Duc, director of Bkis Security, writes that the recently detected Fakeupver trojan establishes a backdoor on compromised systems while camouflaging its presence by posing as an Adobe update utility. The malware camouflages itself by using the same icons and version number as the official package.

Variants of the malware also pose as updaters for Java and other software applications.

Duc explains: "From analysis, we found that malware is written in Visual Basic, faking such popular programs as Adobe, DeepFreeze, Java, Windows, etc. In addition, on being executed, they immediately turn on the following services: DHCP client, DNS client, Network share and open port to receive hacker’s commands."

source : theregister.co.uk

Hacker's record credit card theft fetches 20-year sentence

Confessed TJX hacker Albert Gonzalez was sentenced to 20 years in federal prison for orchestrating one of the largest thefts of payment card numbers in history.

The sentence, imposed by US District Court Judge Patti Saris, is the lengthiest ever handed down in a US hacking or identity-theft prosecution. Miami-based Gonzalez was also fined $25,000 and still faces restitution payments that could run to tens of millions of dollars.

Prosecutors told the judge Gonzalez should receive 25 years because he victimized millions of people and cost banks and their insurers as much as $200m. His attorney, Martin Weinberg, challenged that estimate and presented evidence his client suffered from Asperger's Syndrome, a form of autism.

Last year, Gonzalez pleaded guilty in three separate cases brought in Massachusetts, New Jersey and New York. Thursday's sentence in Boston dealt only with the Massachusetts case. A hearing scheduled for Friday will deal with the other two prosecutions.

Prosecutors said Gonzalez led a gang of hackers who conducted war-driving campaigns that identified retailers with weak wireless networks. They then penetrated those networks and installed sniffer programs that siphoned millions of credit and debit card numbers as they were being zapped to payment processors.

The operation targeted a variety of retailers and restaurants, including TJX Cos., BJ's Wholesale Club, OfficeMax, Barnes & Noble, and the Dave & Buster's restaurant chain. Thursday's sentence came the same day Dave & Buster's agreed to implement a comprehensive security program to settle US Federal Trade Commission charges that the restaurant left consumers vulnerable to credit card thieves.

source : theregister.co.uk

Hackers hit where they live

New research has exposed the home countries of the hackers behind malware-laced spam runs, confirming that they are often located thousands of miles away from the compromised systems they use to send out junk mail.

More than a third of targeted malware attacks sent so far in March came from the United States (36.6 per cent), based on mail server location. However, once the sender's actual location is analysed, more targeted attacks actually began in China (28.2 per cent) and Romania (21.1 per cent) than in the US (13.8 per cent), according to the March 2010 edition of the monthly MessageLabs security report.

Paul Wood, MessageLabs intelligence senior analyst, explained the discrepancy: “A large proportion of targeted attacks are sent from legitimate webmail accounts which are located in the US and therefore, the IP address of the sending mail server is not a useful indicator of the true origin of the attack.

"Analysis of the sender’s IP address, rather than the IP address of the email server, reveals the true source of these targeted attacks.”

Further analysis shows that the people at the sharp end of targeted malware attacks are often responsible for foreign trade and defence policy, especially in relation to Asian countries. Virus activity in Taiwan reached one in 90.9 emails, making it the most targeted country for email-borne malware in March. By comparison, one in 552 emails sent to US mailboxes came laced with malware.

Meanwhile, one in 77.1 emails sent to public sector mailboxes was blocked as malicious by MessageLabs.

The worldwide ratio of email-borne viruses to regular email traffic was one in 358.3 emails (0.28 per cent) in March, a decrease of 0.05 percentage points since February. In March, 16.8 per cent of email-borne malware contained links to malicious websites, a big decrease of 13.7 percentage points since February.

Once connections from known black spots were taken out of the picture, spam rates reached 90.7 per cent, an increase of 1.5 percentage points since February. The vast majority of these junk mail messages came from compromised, malware-infested networks of zombie PCs (aka botnets). MessageLabs reports that 77 per cent of spam sent from the Rustock botnet this month used secure TLS connections.

The average additional inbound and outbound traffic due to TLS carries an overhead of around 1KB per message - frequently outweighing the spam message itself - putting added strain on already pressured email servers. Spam sent using TLS accounted for approximately 20 per cent of all junk mail so far in March, peaking at 35 per cent on March 10.

“TLS is a popular way of sending email through an encrypted channel," Wood said. “However, it uses far more server resources and is much slower than plain-text email and requires both inbound and outbound traffic. The outbound traffic frequently outweighs the size of the spam message itself and can significantly tax the workload on corporate email servers.”

source : theregister.co.uk

Microscope-wielding boffins crack cordless phone crypto

Cryptographers have broken the proprietary encryption used to prevent eavesdropping on more than 800 million cordless phones worldwide, demonstrating once again the risks of relying on obscure technologies to remain secure.

The attack is the first to crack the cipher at the heart of the DECT, or Digital Enhanced Cordless Telecommunications, standard, which encrypts radio signals as they travel between cordless phones in homes and businesses and corresponding base stations. A previous hack, by contrast, merely exploited weaknesses in the way the algorithm was implemented.

The fatal flaw in the DECT Standard Cipher is its insufficient amount of "pre-ciphering," which is the encryption equivalent of shaking a cup of dice to make sure they generate unpredictable results. Because the algorithm discards only the first 40 or 80 bits during the encryption process, it's possible to deduce the secret key after collecting and analyzing enough of the protected conversation.
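
To see why skimping on pre-ciphering is fatal, consider a toy stream cipher - the Python sketch below is purely illustrative and is not the real DECT Standard Cipher. If the register is seeded directly with the key and too little early output is discarded, the first keystream bits hand the key straight to an eavesdropper:

    # Toy LFSR keystream generator (illustrative only, not DECT's cipher).
    def lfsr_keystream(key, skip, n):
        """Return n keystream bits after discarding the first `skip`
        bits - the discard phase is the "pre-ciphering"."""
        state = key & 0xFFFF
        out = []
        for i in range(skip + n):
            bit = state & 1
            # toy tap positions: feedback mixes a few state bits
            fb = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            state = (state >> 1) | (fb << 15)
            if i >= skip:
                out.append(bit)
        return out

    key = 0xBEEF
    # With no pre-ciphering, the first 16 output bits are literally the key:
    bits = lfsr_keystream(key, skip=0, n=16)
    print(hex(sum(b << i for i, b in enumerate(bits))))  # 0xbeef falls straight out
    # Discarding enough early output hides that direct relationship:
    print(lfsr_keystream(key, skip=128, n=16))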

"This standard, as with everything else we have broken, has been designed some 20 years ago, and it is proprietary encryption," said Karsten Nohl, one of the cryptographers who helped devise the attack. "It relied on the fact that the encryption was unknown and hence could not be broken. This is a case where something that has some potential for being strong is broken by just this one design decision that in any public review would have been spotted immediately."

Nohl, 28, is the same microscope-wielding University of Virginia reverse engineer who cracked the encryption in the world's most widely used smartcard. In December, he struck again after devising a practical attack for eavesdropping on cellphone calls.

He and fellow researchers Erik Tews of the Darmstadt University of Technology and Ralf-Philipp Weinmann of the University of Luxembourg plan to present their findings Monday at the 2010 Fast Software Encryption workshop in Korea.

Like several of Nohl's previous hacks, it began with nitric acid and an electron optical microscope. After dissolving away the epoxy on the silicon chip and then shaving down and magnifying the section dedicated to the DECT encryption, he was able to glean key insights into the underlying algorithm. He then compared the findings against details selectively laid out in a patent and exposed during a debug process.

The results of all three probe methods revealed the fatally insufficient amount of pre-ciphering in the DECT Standard Cipher.

In practical terms, the attack works by collecting bits of the encrypted data stream with known unencrypted contents. In cordless phones, this often comes from a device's control channel, which broadcasts a variety of predictable data, including call duration and button responses. Sniffing an encrypted conversation with a USRP software-defined radio and an average PC, an attacker would need to collect about four hours of data to break the key in typical scenarios.

In others - such as where DECT is used in restaurants and bars to wirelessly zap payment card details - the time needed to crack the key could be dramatically shorter, Nohl said. The time can also be sped up in a variety of other ways, including by adding certain types of graphics cards to beef up the power of the attacking PC. In some cases, the attack can retrieve the secret key in 10 minutes.

"We expect that some smarter cryptographers than ourselves will find better attacks, of course," Nohl told El Reg. "We found the algorithm and then implemented the first attack. It's almost guaranteed that this is not the best attack."

The DECT Forum, the international body that oversees the standard, said it takes the attack scenarios laid out in the paper seriously and "continues to investigate their applicability."

The crack of DECT is only the latest time Nohl has defeated the proprietary encryption of a device with critical mass. His 2008 attack on the Mifare Classic smartcard used similar techniques of filing down a silicon chip and then tracing the connections between transistors. His proposed attack on GSM encryption affects cellphones used by more than 800 carriers in 219 countries.

Open Source Keykeriki Captures Wireless Keyboard Traffic

Another interesting attack: rather than going after the PC or server, this one goes after the data sent by wireless devices such as the wireless keyboards sold by Microsoft. The neat thing is that, by using a replay attack, you could also send rogue inputs to the device.

But then it serves Microsoft right for using XOR encryption for the data streams, which can very easily be broken using frequency analysis.
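
As a minimal sketch of just how weak that is - the keystream and frames below are invented for illustration - XORing a captured frame against its known or guessed plaintext yields the keystream, which then decrypts every other frame protected by the same pad:

    # Why XOR "encryption" collapses under known plaintext (Python sketch).
    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    keystream = b"\x42\x13\x37\xaa\x55\x99\x01\xfe" * 4  # reused pad

    c1 = xor_bytes(b"password: hunter2", keystream)  # a sniffed frame
    c2 = xor_bytes(b"transfer $1000.00", keystream)  # another frame, same pad

    # The attacker knows (or guesses) the plaintext of the first frame...
    recovered_pad = xor_bytes(c1, b"password: hunter2")
    # ...and can now decrypt anything else sent under the same keystream:
    print(xor_bytes(c2, recovered_pad))  # b'transfer $1000.00'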

Security researchers on Friday unveiled an open-source device that captures the traffic of a wide variety of wireless devices, including keyboards, medical devices, and remote controls.

Keykeriki version 2 captures the entire data stream sent between wireless devices using a popular series of chips made by Norway-based Nordic Semiconductor. That includes the device addresses and the raw payload being sent between them. The open-source package was developed by researchers of Switzerland-based Dreamlab Technologies and includes complete software, firmware, and schematics for building the $100 sniffer.

Keykeriki not only allows researchers or attackers to capture the entire layer 2 frames, it also allows them to send their own unauthorized payloads. That means devices that don’t encrypt communications – or don’t encrypt them properly – can be forced to cough up sensitive communications or be forced to execute rogue commands.

It’ll be interesting to see what other kinds of devices they can successfully use this data capture technique on. Keyboards are one thing, and I’d imagine the transmission range of a wireless keyboard is fairly limited, so you or the sniffing device would have to be physically near the target.

At least Logitech seems to have stepped up security a bit by using AES-128 for transmission on its wireless keyboards, but the researchers say they may still be able to crack it because of the way the secret keys are exchanged.

Again, most likely not an algorithm problem but an issue with the implementation.

At the CanSecWest conference in Vancouver, Dreamlab Senior Security Expert Thorsten Schroder demonstrated how Keykeriki could be used to attack wireless keyboards sold by Microsoft. The exploit worked because communications in the devices are protected by a weak form of encryption known as xor, which is trivial to break. As a result, he was able to intercept keyboard strokes as they were typed and to remotely send input that executed commands on the attached computer.

“Microsoft made it easy for us because they used their own proprietary crypto,” Schroder said. “Xor is not a very proper way to secure data.”

Even when devices employ strong cryptography, Schroder said Keykeriki may still be able to remotely send unauthorized commands using a technique known as a replay attack, in which commands sent previously are recorded and then sent again.

News time is always fun during conference season, as all these interesting new attacks and vectors are released for public consumption - generally along with code and examples.

If they can use the same techniques to own more interesting devices with more sensitive data, things could certainly get a little more heated.

source : darknet.org.uk

Automated Scanning vs the OWASP Top Ten

The OWASP Top Ten is a list of the most critical website security flaws - a list also often used as a minimum standard for website vulnerability assessment (VA) and compliance. There is an ongoing industry dialog about the possibility of identifying the OWASP Top Ten in a purely automated fashion (scanning). People frequently ask what can and can’t be found using either white box or black box scanners. This is important because a single missed - or, more accurately, exploited - vulnerability can cause an organization significant financial harm. Proper expectations must be set when it comes to the various vulnerability assessment solutions.

For our part, WhiteHat Security is in the website security business and provides a vulnerability management service. Our Sentinel Service incorporates expert analysis with proprietary scanning technology. Using a black box process, we assess hundreds of websites a month, more than anyone in the industry. What we’ve come to understand is that a significant portion of vulnerabilities are virtually impossible for scanners to find. By the same token, even the most seasoned Web security experts cannot find many issues in a reliable and consistent manner. To achieve full vulnerability coverage and therefore complete vulnerability management, we must rely on a combination and integration of both methods.

We’d like to share some of our experiences that led to this conclusion. Using situations we’ve seen in the real world, and the OWASP Top Ten as a baseline, we’ll demonstrate why scanning technology alone cannot find the OWASP Top Ten. To begin, we’ll focus on a single feature of a fictitious Web Bank responsible for funds transfers from one account to another account. Here is the full URL:

http://server/transfer.cgi?from_acct=1235813&to_acct=31415&amount=1000.00&session=1001

The “from_acct” is the current user’s account number, “to_acct” is where the money should be sent, “amount” is obviously the transfer amount, and “session” is the authenticated session ID issued after properly logging in. This is a fairly typical and straightforward business process.

Unvalidated Input

Scanners must hazard a guess about what “transfer.cgi” does. Otherwise, it would be impossible to determine what it should NOT do.

A website security expert can easily figure this out, but scanners aren’t equipped with that intelligence: there is no knowledge of, or appreciation for, context. For the sake of discussion, let’s say a scanner has this ability - the dollar figure and the “transfer” keyword in the URL might help it decide that this feature moves money. Realistically, these parameter names could be anything and are often far more cryptic. To attempt a classic funds transfer attack, let’s change the above URL, substituting the “1000.00” amount with “-1000.00”.

Negative Amount Example:
http://server/transfer.cgi?from_acct=1235813&to_acct=31415&amount=-1000.00&session=1001

By transferring a negative amount, this custom Web application would potentially deduct money from the target account instead of adding to it! The challenge for a scanner is being able to decide whether or not the attack succeeded. How would it tell?

If the fraudulent transfer succeeded, the website might respond with “Success, would you like to make another transaction?”, “Transfer will take place by 9 AM tomorrow,” “Request received, thank you,” or any number of possible affirmations. If the attack failed: “Transfer failed,” “Error: Transfer amount must be a positive number,” or “Bank robbery detected, men with guns have been dispatched to your location!” Every custom Web Bank application will likely respond in a different manner. That’s precisely the problem! Pre-programming all the possible keyword phrases or behavioral aspects is simply unfeasible - and, in any provable sense, impossible. However, human gray matter (or a crack website security operations team) can make this determination.
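
To make the problem concrete, here is a sketch of the kind of brittle heuristic a scanner is reduced to. The phrase lists below are invented - which is exactly the point, since every custom application answers differently:

    # Sketch of a scanner's keyword-based verdict on a response body.
    # The hint lists are made up; a real bank can answer with anything.
    SUCCESS_HINTS = ["success", "thank you", "will take place", "request received"]
    FAILURE_HINTS = ["failed", "error", "must be a positive number"]

    def attack_verdict(response_body):
        text = response_body.lower()
        if any(h in text for h in FAILURE_HINTS):
            return "probably failed"
        if any(h in text for h in SUCCESS_HINTS):
            return "probably succeeded"
        return "unknown"  # the common case on a custom application

    print(attack_verdict("Transfer will take place by 9 AM tomorrow."))
    print(attack_verdict("Su transferencia fue procesada."))  # defeats the lists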

10 Steps to Protect your Websites from SQL Injection Attacks

Data theft has become so common that the price of a stolen credit card number on the black market has fallen from $10 in 2006 to a few pennies in 2009. Consumers are losing confidence in ecommerce, online banking, and other electronic means of doing business. Meanwhile, attackers are devising ever more clever ways to steal data, and increasing numbers of companies are falling prey to those techniques. Legal and compliance requirements are getting stricter to protect the consumer, but new incidents are still on the rise in 2009. A recent Verizon Business Data Breach Investigations Report, studying over 600 incidents in the past five years, identified SQL injection as the single largest attack vector responsible for data theft.

This finding is not surprising. Given the way Web applications are designed, it is very common for SQL injection attacks to occur without a company’s knowledge. Often, it is only when credit card companies such as Visa and American Express notify the victimized company that it learns about the hack - and by then, it’s too late.

SQL injection attacks have the potential to cause significant and costly damage to an organization. They are targeted at the database, which stores sensitive information including employee and customer data. This type of attack exploits vulnerabilities in your application and manipulates the SQL queries in the application via input from the Web browser.

In a SQL injection attack, a malicious user can send arbitrary input to the server and trick the Web application into generating a different SQL statement than was originally intended. As a result, the SQL, when executed, fetches a different set of results from the database than the application would have originally requested. SQL injection attacks are most frequently used to gain unauthorized access to, or manipulate the data residing in, the database on the server.
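
As a minimal sketch of the mechanics - using Python's built-in sqlite3 module, with an invented table and data - compare a query built by string concatenation with its parameterized equivalent:

    # SQL injection in miniature: concatenation vs. a parameterized query.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', '4111-1111')")

    user_input = "x' OR '1'='1"  # attacker-supplied "name"

    # Vulnerable: the input is spliced into the SQL text, so the attacker
    # rewrites the WHERE clause and dumps every row.
    q = "SELECT name, card FROM users WHERE name = '" + user_input + "'"
    print(conn.execute(q).fetchall())  # [('alice', '4111-1111')]

    # Safe: a parameterized query keeps the input as pure data.
    q = "SELECT name, card FROM users WHERE name = ?"
    print(conn.execute(q, (user_input,)).fetchall())  # []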

Much has already been written about how SQL injection attacks are performed. The focus here is to prevent the attacks in the first place. Following are 10 steps that both developers and database administrators can take to prevent applications from being vulnerable to SQL injection attacks.
