Guide: Choosing A Good Domain Name, Things to Keep in Mind

Choosing a domain name for your site is one of the most important steps towards creating the perfect internet presence. If you run an on-line business, picking a name that will be marketable and achieve success in search engine placement is paramount. Many factors must be considered when choosing a good domain name. This article summarizes all the different things to consider before making that final registration step!

Short and Sweet

Domain names can be really long or really short (1 – 67 characters). In general, it is far better to choose a domain name that is short. The shorter your domain name, the easier it will be for people to remember. Memorability matters enormously from a marketability perspective. As visitors reach your site and enjoy using it, they will likely tell people about it, and those people may tell others, and so on. As with any business, word of mouth is the most powerful marketing tool for driving traffic to your site (and it’s free too!). If your domain name is long and difficult to pronounce, people will not remember it, and unless they bookmark the link, they may never return.

Consider Alternatives

Unless a visitor reaches your site through a bookmark or a link from another site, they have to type your domain name into their browser. Many people are poor typists and misspell words constantly. If your domain name is easy to misspell, you should think about alternate domain names to purchase. For example, if your site will be called “MikesTools.com”, you should also consider buying “MikeTools.com” and “MikeTool.com”. You should also secure the different top level domains besides the one you will use for marketing purposes (“MikesTools.net”, “MikesTools.org”, etc.). Check, too, whether there are existing sites based on the misspelled versions of the domain name you are considering. “MikesTools.com” may be available, but “MikeTool.com” may be home to a graphic pornography site. You would hate for a visitor to walk away thinking you were hosting something they did not expect.

Also consider domain names that may not include the name of your company, but rather describe what your company provides. For example, if the name of your company is Mike’s Tools, you may want to consider domain names that target what you sell, such as “buyhammers.com” or “hammer-and-nail.com”. Even though these alternative domain names do not include the name of your company, they provide an avenue for visitors from your target markets. Remember that you can own multiple domain names, all of which can point to a single site. For example, you could register “buyhammers.com”, “hammer-and-nail.com”, and “mikestools.com” and have “buyhammers.com” and “hammer-and-nail.com” point to “mikestools.com”.
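In practice, “pointing” the secondary domains at the main one is usually done either through your registrar’s domain-forwarding feature or with a permanent (301) HTTP redirect on your web server. The snippet below is a minimal, illustrative Python sketch of that redirect idea, using the hypothetical domains from this article; a real deployment would normally configure the redirect in the web server or at the registrar rather than run a standalone script.

# Illustrative sketch: redirect secondary domains to the main site with a 301,
# which also tells browsers and search engines that the move is permanent.
# Domain names are the hypothetical examples from this article.
from http.server import BaseHTTPRequestHandler, HTTPServer

MAIN_SITE = "https://mikestools.com"
SECONDARY = {"buyhammers.com", "www.buyhammers.com",
             "hammer-and-nail.com", "www.hammer-and-nail.com"}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = (self.headers.get("Host") or "").split(":")[0].lower()
        if host in SECONDARY:
            # Permanent redirect, preserving the requested path.
            self.send_response(301)
            self.send_header("Location", MAIN_SITE + self.path)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectHandler).serve_forever()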

Hyphens: Your Friend and Enemy

Domain name availability has become scarcer over the years. Many single-word domain names have already been scooped up, which makes it increasingly difficult to find a name that you like and that is still available. When selecting a domain name, you have the option of including hyphens as part of the name. Hyphens help because they let you clearly separate multiple words in a domain name, making it less likely that a person will accidentally misspell it.

For example, people are more likely to misspell “domainnamecenter.com” than “domain-name-center.com”. Having words crunched together is hard on the eyes, increasing the likelihood of a misspelling. On the other hand, hyphens make your domain name longer, and the longer the domain name, the easier it is for people to forget it altogether. Also, if someone recommends a site to someone else, they may forget to mention that each word in the domain name is separated by a hyphen. If you do choose to use hyphens, limit the number of hyphenated words to three. Another advantage of hyphens is that search engines can pick up each unique word in the domain name as a keyword, helping to make your site more visible in search engine results.

Dot What?

There are many top level domains available today, including .com, .net, .org, and .biz. In most cases, the more unusual the top level domain, the more names are still available under it. However, .com is far and away the most commonly used top level domain on the internet, driven by the fact that it was the first extension put to commercial use and has received incredible media attention. If you cannot lay your hands on a .com domain name, look for a .net domain, which is the second most commercially popular extension.

Long Arm of the Law

Be very careful not to register domain names that include trademarked names. Although domain name law is still developing and there is little case law to draw on, the risk of a legal battle is not one worth taking. Even if you believe your domain name is untouchable by a business that has trademarked a name, do not take the chance: the cost of litigation is extremely high, and unless you have deep pockets you will not likely have the resources to defend yourself in court. Stay away even from domain names in which only part of the name is trademarked: the risks are the same.

Search Engines and Directories

All search engines and directories are different. Each has a unique process for being part of the results or directory listing and each has a different way of sorting and listing domain names.

Search engines and directories are the most important online marketing channel, so consider how your domain name choice affects site placement before you register the domain. Most directories simply list links to home pages in alphabetical order. If possible, choose a domain name with a letter near the beginning of the alphabet (“a” or “b”). For example, “aardvark-pest-control.com” will appear well above “joes-pest-control.com”. However, check the directories before you choose a domain name; you may find that the directories you would like to be in are already cluttered with domain names beginning with the letter “a”. Search engines scan websites and sort results based on keywords, that is, the words a person visiting a search engine actually searches on. Having keywords as part of your domain name can help you get better results.

Explained: What “DirectX” really is, How it works

Ever wondered just what that enigmatic name means?

Gaming and multimedia applications are some of the most satisfying programs you can get for your PC, but getting them to run properly isn’t always as easy as it could be. First, the PC architecture was never designed as a gaming platform. Second, the wide-ranging nature of the PC means that one person’s machine can be very different from another’s. While games consoles all contain the same hardware, PCs don’t: the massive range of differences can make gaming a headache. To alleviate as much of the pain as possible, Microsoft needed to introduce a common standard which all games and multimedia applications could follow – a common interface between the OS and whatever hardware is installed in the PC, if you like. This common interface is DirectX, something which can be the source of much confusion.

DirectX is an interface designed to make certain programming tasks much easier, for both the game developer and the rest of us who just want to sit down and play the latest blockbuster. Before we can explain what DirectX is and how it works though, we need a little history lesson.

DirectX history

Any game needs to perform certain tasks again and again. It needs to watch for your input from mouse, joystick or keyboard, and it needs to be able to display screen images and play sounds or music. That’s pretty much any game at the most simplistic level.

Imagine how incredibly complex this was for programmers developing on the early pre-Windows PC architecture, then. Each programmer needed to develop their own way of reading the keyboard or detecting whether a joystick was even attached, let alone being used to play the game. Specific routines were needed even to display the simplest of images on the screen or play a simple sound.

Essentially, the game programmers were talking directly to your PC’s hardware at a fundamental level. When Microsoft introduced Windows, it was imperative for the stability and success of the PC platform that things were made easier for both the developer and the player. After all, who would bother writing games for a machine when they had to reinvent the wheel every time they began work on a new game? Microsoft’s idea was simple: stop programmers talking directly to the hardware, and build a common toolkit which they could use instead. DirectX was born.

How it works

At the most basic level, DirectX is an interface between the hardware in your PC and Windows itself, part of the Windows API or Application Programming Interface. Let’s look at a practical example. When a game developer wants to play a sound file, it’s simply a case of using the correct library function. When the game runs, this calls the DirectX API, which in turn plays the sound file. The developer doesn’t need to know what type of sound card he’s dealing with, what it’s capable of, or how to talk to it. Microsoft has provided DirectX, and the sound card manufacturer has provided a DirectX-capable driver. He asks for the sound to be played, and it is – whichever machine it runs on.
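Conceptually, DirectX is a classic hardware abstraction layer: the game talks to one common interface, and each manufacturer supplies a driver that implements it for its own hardware. The Python sketch below only illustrates that pattern; it is not the real DirectX or DirectSound API (which is C/C++ and COM-based), and all the class and file names are made up for the example.

# Illustrative sketch of the abstraction-layer idea behind DirectX.
# Not the actual DirectX API; names here are invented for the example.
from abc import ABC, abstractmethod

class SoundDevice(ABC):
    """The common interface the 'DirectX layer' promises to every game."""
    @abstractmethod
    def play(self, filename: str) -> None: ...

class VendorADriver(SoundDevice):
    # Supplied by one sound card manufacturer.
    def play(self, filename: str) -> None:
        print(f"[Vendor A hardware] playing {filename}")

class VendorBDriver(SoundDevice):
    # Supplied by a different manufacturer.
    def play(self, filename: str) -> None:
        print(f"[Vendor B hardware] playing {filename}")

def game_code(sound: SoundDevice) -> None:
    # The game only knows the common interface, never the hardware details.
    sound.play("explosion.wav")

# Whichever driver is installed, the same game code works unchanged.
game_code(VendorADriver())
game_code(VendorBDriver())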

From our point of view as gamers, DirectX also makes things incredibly easy – at least in theory. You install a new sound card in place of your old one, and it comes with a DirectX driver. Next time you play your favourite game you can still hear sounds and music, and you haven’t had to make any complex configuration changes.

Originally, DirectX began life as a simple toolkit: early hardware was limited and only the most basic graphical functions were required. As hardware and software have evolved in complexity, so has DirectX. It’s now much more than a graphical toolkit, and the term has come to encompass a massive selection of routines which deal with all sorts of hardware communication. For example, the DirectInput routines can deal with all sorts of input devices, from simple two-button mice to complex flight joysticks. Other parts include DirectSound for audio devices and DirectPlay, which provides a toolkit for online or multiplayer gaming.

DirectX versions

The current version of DirectX at the time of writing is DirectX 11.1, which runs on Windows 7 and Windows 8. Before that, DirectX 9.0 was the most widely supported version, running on everything from Windows 98 to XP, including Windows Server 2003. It doesn’t run on Windows 95, though: if you have a machine with Windows 95 installed, you’re stuck with the older and less capable 8.0a. Windows NT 4 also requires a specific version – in this case, DirectX 3.0a.

With so many versions of DirectX released over the years, it can be difficult to keep track of which one you need. In all but the rarest cases, DirectX is backward compatible: games which say they require DirectX 7 will happily run with more recent versions, but not with older ones. Many current titles explicitly state that they require DirectX 11 or later, and won’t run without the latest version installed. This is because they make use of new features introduced with that version, although it has been known for lazy developers to specify the very latest version as a requirement when the game in question doesn’t use any of the new enhancements. Generally speaking, though, if a title is version-locked like this, you will need to upgrade before you can play. Improvements to the core DirectX code mean you may even see performance improvements in many titles when you upgrade to the latest build. Downloading and installing DirectX need not be complex, either.

Upgrading DirectX

All available versions of Windows come with DirectX in one form or another as a core system component which cannot be removed, so you should always have at least a basic implementation of the system installed on your PC. However, many new games require the very latest version before they work properly, or even at all.

Generally, the best place to install the latest version of DirectX from is the dedicated section of the Microsoft website, found at http://www.microsoft.com/en-in/download/details.aspx?id=35. As we went to press, the most recent build available for general download was DirectX 11.1. You can download either a simple installer, which will in turn download the components your system requires as it installs, or the complete distribution package in one go for later offline installation.

Another good source for DirectX is games themselves. If a game requires a specific version, it’ll be on the installation CD and may even be installed automatically by the game’s installer itself. You won’t find it on magazine cover discs though, thanks to Microsoft’s licensing terms.

Diagnosing problems

Diagnosing problems with a DirectX installation can be problematic, especially if you don’t know which one of the many components is causing your newly purchased game to fall over. Thankfully, Microsoft provides a useful utility called the DirectX Diagnostic Tool, although this isn’t made obvious. You won’t find this tool in the Start Menu with any version of Windows, and each tends to install it in a different place.

The easiest way to use it is to open the Start Menu’s Run dialog, type in “dxdiag” and then click OK. When the application first loads, it takes a few seconds to interrogate your DirectX installation and find any problems. First, the DirectX Files tab displays version information on each one of the files your installation uses. The Notes section at the bottom is worth checking, as missing or corrupted files will be flagged here.

The tabs marked Display, Sound, Music, Input and Network all relate to specific areas of DirectX, and all but the Input tab provide tools to test the correct functioning of your hardware. Finally, the More Help tab provides a useful way to start the DirectX Troubleshooter, Microsoft’s simple linear problem-solving tool for many common DirectX issues.

Story of How Computer Viruses Evolved

Like any other field in computer science, viruses have evolved a great deal over the years. In this series of articles, we will look at the origins and evolution of malicious code, from its first appearance up to the present.

Going back to the origin of viruses, it was in 1949 that the mathematician John von Neumann described self-replicating programs which resemble what we now know as computer viruses. However, it was not until the 60s that we find the predecessor of today’s viruses. In that decade, a group of programmers developed a game called Core Wars, which could reproduce every time it was run and even saturate the memory of other players’ computers. The creators of this peculiar game also created the first antivirus, an application named Reaper, which could destroy the copies created by Core Wars.

However, it was only in 1983 that one of these programmers announced the existence of Core Wars, which was described the following year in a prestigious scientific magazine: this was actually the starting point of what we call computer viruses today.

At that time, a still-young MS-DOS was starting to become the preeminent operating system worldwide. It was a system with great prospects but also many shortcomings, a consequence of the state of software development and the absence of much of the hardware we know today. Even so, the new operating system became the target of a virus in 1986: Brain, a malicious code created in Pakistan which infected the boot sectors of disks so that their contents could not be accessed. That year also saw the birth of the first Trojan: an application called PC-Write.

Shortly after, virus writers realized that infecting files could be even more harmful to systems. In 1987, a virus called Suriv-02 appeared, which infected COM files and opened the door to the infamous Jerusalem and Viernes 13 viruses. However, the worst was still to come: in 1988 the “Morris worm” appeared, infecting 6,000 computers.

From that date up to 1995, the types of malicious code that are known today started being developed: the first macro viruses appeared, then polymorphic viruses, and so on. Some of these even triggered epidemics, such as Michelangelo. However, one event changed the virus scenario worldwide: the massive uptake of the Internet and e-mail. Little by little, viruses started adapting to this new situation until the appearance, in 1999, of Melissa, the first malicious code to cause a worldwide epidemic, opening a new era for computer viruses.

Part 1

This installment of ‘The evolution of viruses’ will look at how malicious code used to spread before the Internet and e-mail became as commonplace as they are today, and at the main objectives of the creators of those earlier viruses. Until the worldwide web and e-mail were adopted as a standard means of communication the world over, the main media through which viruses spread were floppy disks, removable drives, CDs, and so on, containing files that were already infected or carrying the virus code in an executable boot sector.

When a virus entered a system it could go memory resident, infecting other files as they were opened, or it could start to reproduce immediately, also infecting other files on the system. The virus code could also be triggered by a certain event, for example when the system clock reached a certain date or time. In this case, the virus creator would calculate the time necessary for the virus to spread and then set a date, often one with some particular significance, for the virus to activate. In this way, the virus would have an incubation period during which it didn’t visibly affect computers, but just spread from one system to another waiting for ‘D-day’ to launch its payload. This incubation period was vital to the virus successfully infecting as many computers as possible.

One classic example of a destructive virus that lay low before releasing its payload was CIH, also known as Chernobyl. The most damaging version of this malicious code activated on April 26, when it would try to overwrite the flash-BIOS, the memory which includes the code needed to control PC devices. This virus, which first appeared in June 1998, had a serious impact for over two years and still continues to infect computers today.

Because of the way in which they propagate, these viruses spread very slowly, especially in comparison to the speed of today’s malicious code. Towards the end of the Eighties, for example, the Friday 13th (or Jerusalem) virus needed a long time to actually spread and continued to infect computers for some years. In contrast, experts reckon that in January 2003, SQLSlammer took just ten minutes to cause global communication problems across the Internet.

Notoriety versus stealth

For the most part, in the past, the activation of malicious code triggered a series of on-screen messages or images, or caused sounds to be emitted to catch the user’s attention. Such was the case with the Ping Pong virus, which displayed a ball bouncing from one side of the screen to the other. This kind of elaborate display was used by the creator of the virus to gain as much notoriety as possible. Nowadays, however, the opposite is the norm, with virus authors trying to make malicious code as discreet as possible, infecting users’ systems without them noticing that anything is amiss.

Part 2

This installment of ‘The evolution of viruses’ will look at how the Internet and e-mail changed the propagation techniques used by computer viruses.

Internet and e-mail revolutionized communications. However, as expected, virus creators didn’t take long to realize that along with this new means of communication, an excellent way of spreading their creations far and wide had also dawned. Therefore, they quickly changed their aim from infecting a few computers while drawing as much attention to themselves as possible, to damaging as many computers as possible, as quickly as possible. This change in strategy resulted in the first global virus epidemic, which was caused by the Melissa worm.

With the appearance of Melissa, the economic impact of a virus started to become an issue. As a result, users, and above all companies, started to become seriously concerned about the consequences of viruses for the security of their computers. This is how users discovered antivirus programs, which started to be installed widely. However, this also brought about a new challenge for virus writers: how to slip past this protection and how to persuade users to run infected files.

The answer to the latter challenge came in the form of a new worm: Love Letter, which used a simple but effective ruse that could be considered an early form of social engineering. The strategy involves disguising the message so that users believe it contains anything but a virus. This worm’s bait was simple: it led users to believe that they had received a love letter.

This technique is still the most widely used. However, it is closely followed by another tactic that has been the center of attention lately: exploiting vulnerabilities in commonly used software. This strategy offers a range of possibilities depending on the security hole exploited. The first malicious code to use this method, and quite successfully, were the BubbleBoy and Kakworm worms. These worms exploited a vulnerability in Internet Explorer by inserting HTML code in the body of the e-mail message, which allowed them to run automatically, without the user needing to do a thing.

Vulnerabilities allow many different types of actions to be carried out. For example, they allow viruses to be dropped on computers directly from the Internet, as with the Blaster worm. In fact, the effects of a virus depend on the vulnerability that its author tries to exploit.

Part 3

In the early days of computers, there were relatively few PCs likely to contain “sensitive” information, such as credit card numbers or other financial data, and these were generally limited to large companies that had already incorporated computers into working processes.

In any event, information stored in computers was not likely to be compromised, unless the computer was connected to a network through which the information could be transmitted. Of course, there were exceptions to this and there were cases in which hackers perpetrated frauds using data stored in IT systems. However, this was achieved through typical hacking activities, with no viruses involved.

The advent of the Internet, however, caused virus creators to change their objectives: from that moment on, they tried to infect as many computers as possible in the shortest time. Also, the introduction of Internet services like e-banking and online shopping brought another change. Some virus creators started writing malicious code not simply to infect computers, but to steal confidential data associated with those services. Evidently, to achieve this, they needed viruses that could infect many computers silently.

Their malicious labor was finally rewarded with the appearance, in 1986, of a new breed of malicious code generically called the “Trojan Horse”, or simply “Trojan”. This first Trojan was called PC-Write and tried to pass itself off as the shareware version of a text processor. When run, the Trojan displayed a functional text processor on screen. The problem was that, while the user wrote, PC-Write deleted and corrupted files on the computer’s hard disk.

After PC-Write, this type of malicious code evolved very quickly to reach the stage of present-day Trojans. Today, many of the people who design Trojans to steal data cannot be considered virus writers but simply thieves who, instead of using a blowtorch or dynamite, have turned to viruses to commit their crimes. Ldpinch.W or the Bancos and Tolger families of Trojans are examples of this.

Part 4

Even though none of them can be left aside, some fields of computer science have played a more decisive role than others in the evolution of viruses. One of the most influential has been the development of programming languages.

These languages are basically a means of communication with computers in order to tell them what to do. Even though each of them has its own specific development and formulation rules, computers in fact understand only one language called “machine code”.

Programming languages act as an interpreter between the programmer and the computer. Obviously, the more directly you can communicate with the computer, the better it will understand you, and the more complex the actions you can ask it to perform.

Accordingly, programming languages can be divided into “low-level” and “high-level” languages, depending on whether their syntax is more understandable for programmers or for computers. A “high-level” language uses expressions that are easily understandable for most programmers, but not so much for computers. Visual Basic and C are good examples of this type of language.

On the contrary, expressions used by “low-level” languages are closer to machine code, but are very difficult to understand for someone who has not been involved in the programming process. One of the most powerful, most widely used examples of this type of language is “assembler”.

In order to explain the use of programming languages through virus history, it is necessary to refer to hardware evolution. It is not difficult to understand that an old 8-bit processor does not have the power of modern 64-bit processors, and this of course, has had an impact on the programming languages used.

In this and the next installments of this series, we will look at the different programming languages used by virus creators through computer history:

– Virus antecessor: Core Wars

As was already explained in the first chapter of this series, a group of programs called Core Wars, developed by engineers at an important telecommunications company, is considered the predecessor of current-day viruses. Computer science was still in its early stages and programming languages had hardly developed, so the authors of these proto-viruses used a language that was almost equal to machine code to program them.

Curiously enough, it seems that one of the Core Wars programmers was Robert Thomas Morris, whose son years later programmed the “Morris worm”. This malicious code became extraordinarily famous since it managed to infect 6,000 computers, an impressive figure for 1988.

– The new gurus of the 8-bit era and assembler language

The names Altair, IMSAI and Apple in the USA, and Sinclair, Atari and Commodore in Europe, bring back memories of times gone by, when a new generation of computer enthusiasts “fought” to establish their place in the programming world. To be the best, programmers needed a profound knowledge of machine code and assembler, as interpreters of high-level languages used too much run time. BASIC, for example, was a relatively easy language to learn which allowed users to develop programs simply and quickly. It had, however, many limitations.

This caused the appearance of two groups of programmers: those who used assembler and those who turned to high-level languages (BASIC and PASCAL, mainly).

Computer aficionados of the time enjoyed themselves more by programming useful software than malware. However, 1981 saw the birth of what can be considered the first 8-bit virus. Its name was “Elk Cloner”, and it was programmed in machine code. This virus could infect Apple II systems and displayed a message when it infected a computer.

Part 5

Computer viruses evolve in much the same way as in other areas of IT. Two of the most important factors in understanding how viruses have reached their current level are the development of programming languages and the appearance of increasingly powerful hardware.

In 1981, almost at the same time as Elk Cloner (the first virus for 8-bit processors) made its appearance, a new operating system was growing in popularity. Its full name was Microsoft Disk Operating System, although computer buffs throughout the world would soon refer to it simply as DOS.

DOS viruses

The development of MS-DOS systems occurred in parallel with the appearance of new, more powerful hardware. Personal computers were gradually establishing themselves as tools that people could use in their everyday lives, and the result was that the number of PC users grew substantially. Perhaps inevitably, more users also started creating viruses. Gradually, we witnessed the appearance of the first viruses and Trojans for DOS, written in assembler language and demonstrating a degree of skill on the part of their authors.

Far fewer programmers know assembler language than are familiar with high-level languages, which are far easier to learn. Malicious code written in Fortran, Basic, Cobol, C or Pascal soon began to appear. The last two languages, which are well established and very powerful, were the most widely used, particularly in their Turbo C and Turbo Pascal versions. This ultimately led to the appearance of “virus families”: that is, viruses that are followed by a vast number of related viruses which are slightly modified forms of the original code.

Other users took the less ‘artistic’ approach of creating destructive viruses that did not require any great knowledge of programming. As a result, batch processing file viruses or BAT viruses began to appear.

Win16 viruses

The development of 16-bit processors led to a new era in computing. The first consequence was the birth of Windows, which, at the time, was just an application to make it easier to handle DOS using a graphic interface.

The structure of Windows 3.xx files is rather difficult to understand, and the assembler language code is very complicated, as a result of which few programmers initially attempted to develop viruses for this platform. But this problem was soon solved thanks to the development of programming tools for high-level languages, above all Visual Basic. This application is so effective that many virus creators adopted it as their ‘daily working tool’. This meant that writing a virus had become a very straightforward task, and viruses soon appeared in their hundreds. This development was accompanied by the appearance of the first Trojans able to steal passwords. As a result, more than 500 variants of the AOL Trojan family designed to steal personal information from infected computers were identified.

Part 6

This part of the history of computer viruses looks at how the development of Windows and Visual Basic influenced the evolution of viruses; as these developed, so did worldwide epidemics, the first of which was caused by Melissa in 1999.

While Windows changed from being an application designed to make DOS easier to manage to a 32-bit platform and operating system in its own right, virus creators went back to using assembler as the main language for programming viruses.

Versions 5 and 6 of Visual Basic (VB) were developed, making it the preferred tool, along with Borland Delphi (the Pascal development environment for Windows), for Trojan and worm writers. Then Visual C, a powerful C-based environment for Windows, was adopted for creating viruses, Trojans and worms. This last type of malware gained unusual strength, overtaking almost all other types of viruses. Even though the characteristics of worms have changed over time, they all have the same objective: to spread to as many computers as possible, as quickly as possible.

With time, Visual Basic became extremely popular and Microsoft implemented part of the functionality of this language as an interpreter capable of running script files with a similar syntax.

At the same time as the Win32 platform was implemented, the first script viruses also appeared: malware inside a simple text file. These demonstrated that executable files (.EXE and .COM files) were not the only ones that could carry viruses. As already seen with BAT viruses, there are other means of propagation, proving the saying “anything that can be executed directly or through an interpreter can contain malware.” Specifically, the first viruses that infected the macros included in Microsoft Office emerged. As a result, Word, Excel, Access and PowerPoint became ways of spreading ‘lethal weapons’, which destroyed information when the user simply opened a document.

Melissa and self-executing worms

The powerful script interpreters in Microsoft Office allowed virus authors to arm their creations with the characteristics of worms. A clear example is Melissa, a Word macro virus with the characteristics of a worm that infects Word 97 and 2000 documents. This worm automatically sends itself out as an attachment to an e-mail message to the first 50 contacts in the Outlook address book on the affected computer. This technique, which has unfortunately become very popular nowadays, was first used in this virus which, in 1999, caused one of the largest epidemics in computer history in just a few days. In fact, companies like Microsoft, Intel or Lucent Technologies had to block their connections to the Internet due to the actions of Melissa.

The technique started by Melissa was developed in 1999 by viruses like VBS/Freelink, which unlike its predecessor sent itself out to all the contacts in the address book on the infected PC. This started a new wave of worms capable of sending themselves out to all the contacts in the Outlook address book on the infected computer. Of these, the worm that most stands out from the rest is VBS/LoveLetter, more commonly known as ‘I love You’, which emerged in May 2000 and caused an epidemic that caused damage estimated at 10,000 million euros. In order to get the user’s attention and help it to spread, this worm sent itself out in an e-mail message with the subject ‘ILOVEYOU’ and an attached file called ‘LOVE-LETTER-FOR-YOU.TXT.VBS’. When the user opened this attachment, the computer was infected.

As well as Melissa, in 1999 another type of virus emerged that also marked a milestone in virus history. In November of that year, VBS/BubbleBoy appeared, a new type of Internet worm written in VBScript. VBS/BubbleBoy ran automatically without the user needing to click on an attached file, as it exploited a vulnerability in Internet Explorer 5 to run when the message was opened or viewed. This worm was followed in 2000 by JS/Kak.Worm, which spread by hiding behind JavaScript in the auto-signature in Microsoft Outlook Express, allowing it to infect computers without the user needing to run an attached file. These were the first samples of a series of worms, which were joined later on by worms capable of attacking computers while the user is simply browsing the Internet.

The never-ending war of viruses still has much further to evolve.

What is Bandwidth? How Much Bandwidth Is Enough?

This is a well-written explanation of bandwidth; hopefully you will find it useful.

Bandwidth Explained

Most hosting companies offer a variety of bandwidth options in their plans. So exactly what is bandwidth as it relates to web hosting? Put simply, bandwidth is the amount of traffic that is allowed to occur between your web site and the rest of the internet. The amount of bandwidth a hosting company can provide is determined by their network connections, both internal to their data center and external to the public internet.

Network Connectivity

The internet, in the simplest of terms, is a group of millions of computers connected by networks. These connections within the internet can be large or small depending upon the cabling and equipment that is used at a particular internet location. It is the size of each network connection that determines how much bandwidth is available. For example, if you use a DSL connection to connect to the internet, you have 1.54 megabits (Mb) of bandwidth. Bandwidth therefore is measured in bits (a single 0 or 1). Bits are grouped into bytes which form words, text, and other information that is transferred between your computer and the internet.

If you have a DSL connection to the internet, you have dedicated bandwidth between your computer and your internet provider. But your internet provider may have thousands of DSL connections to their location. All of these connections aggregate at your internet provider, who then has their own dedicated connection to the internet (or multiple connections) which is much larger than your single connection. They must have enough bandwidth to serve your computing needs as well as those of all their other customers. So while you have a 1.54Mb connection to your internet provider, your internet provider may have a 255Mb connection to the internet so it can accommodate you and roughly 165 other users (255 / 1.54 ≈ 166 full-speed connections in total).
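As a quick sanity check on that arithmetic, here is a short Python sketch using the article's example figures (they are illustrative numbers, not real ISP capacities):

# Back-of-the-envelope check of the figures above.
subscriber_link_mb = 1.54   # one DSL line, in megabits (article's example)
provider_link_mb = 255.0    # the provider's upstream link (article's example)

full_speed_users = provider_link_mb / subscriber_link_mb
print(f"{full_speed_users:.1f}")   # ~165.6, i.e. about 166 full-speed connections

# In practice providers oversubscribe this ratio, because subscribers
# rarely all transfer data at full speed at the same time.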

Traffic

A very simple analogy to use to understand bandwidth and traffic is to think of highways and cars. Bandwidth is the number of lanes on the highway and traffic is the number of cars on the highway. If you are the only car on a highway, you can travel very quickly. If you are stuck in the middle of rush hour, you may travel very slowly since all of the lanes are being used up.

Traffic is simply the number of bits that are transferred over network connections. It is easiest to understand traffic using examples. One gigabyte is 2 to the 30th power (1,073,741,824) bytes. One gigabyte is equal to 1,024 megabytes. To put this in perspective, it takes one byte to store one character. Imagine 100 file cabinets in a building, each holding 1,000 folders. Each folder has 100 papers, and each paper contains 100 characters; a GB is roughly all the characters in the building. An MP3 song is about 4MB, the same song in WAV format is about 40MB, and a full-length movie can be 800MB to 1000MB (1000MB = 1GB).

If you were to transfer this MP3 song from a web site to your computer, you would create 4MB of traffic between the web site you are downloading from and your computer. Depending upon the network connection between the web site and the internet, the transfer may occur very quickly, or it could take time if other people are also downloading files at the same time. If, for example, the web site you download from has a 10MB connection to the internet, and you are the only person accessing that web site to download your MP3, your 4MB file will be the only traffic on that web site. However, if three people are all downloading that same MP3 at the same time, 12MB (3 x 4MB) of traffic has been created. Because in this example the host only has 10MB of bandwidth, someone will have to wait. The network equipment at the hosting company will cycle through each person downloading the file and transfer a small portion at a time so each person’s file transfer can take place, but the transfer for everyone downloading the file will be slower. If 100 people all came to the site and downloaded the MP3 at the same time, the transfers would be extremely slow. If the host wanted to decrease the time it took to download files simultaneously, it could increase the bandwidth of its internet connection (at a cost due to upgrading equipment).
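To make the slowdown concrete, here is a rough Python sketch of the arithmetic. One assumption not stated in the article: the “10MB connection” is treated as 10 megabytes per second of capacity, shared evenly among simultaneous downloads.

# Rough arithmetic for the example above. Assumption: the "10MB connection"
# means 10 megabytes per second of capacity, split evenly between downloads.
file_size_mb = 4.0          # one MP3
link_capacity_mbps = 10.0   # shared by everyone downloading at once

for downloaders in (1, 3, 100):
    share = link_capacity_mbps / downloaders   # fair share per person
    seconds = file_size_mb / share
    print(f"{downloaders:3d} downloader(s): ~{seconds:5.1f} s per file")

# 1 downloader ~0.4 s, 3 downloaders ~1.2 s, 100 downloaders ~40 s:
# the more simultaneous transfers, the slower each one becomes.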

Hosting Bandwidth

In the example above, we discussed traffic in terms of downloading an MP3 file. However, each time you visit a web site you are creating traffic, because in order to view a web page on your computer, the page is first downloaded to your computer (between the web site and you) and then displayed using your browser software (Internet Explorer, Netscape, etc.). The page itself is simply a file that creates traffic just like the MP3 file in the example above (however, a web page is usually much smaller than a music file).

A web page may be very small or large depending upon the amount of text and the number and quality of images integrated within it. For example, the home page for CNN.com is about 200KB (200 Kilobytes = 200,000 bytes = 1,600,000 bits). This is relatively large for a web page. In comparison, Yahoo’s home page is about 70KB.

How Much Bandwidth Is Enough?

It depends (don’t you hate that answer?). But in truth, it does. Since bandwidth is a significant determinant of hosting plan prices, you should take time to determine just how much is right for you. Almost all hosting plans measure their bandwidth allowance per month, so you need to estimate the amount of bandwidth that will be required by your site on a monthly basis.

If you do not intend to provide file download capability from your site, the formula for calculating bandwidth is fairly straightforward:

Average Daily Visitors x Average Page Views x Average Page Size x 31 x Fudge Factor

If you intend to allow people to download files from your site, your bandwidth calculation should be:

[(Average Daily Visitors x Average Page Views x Average Page Size) + (Average Daily File Downloads x Average File Size)] x 31 x Fudge Factor

Let us examine each item in the formula:

Average Daily Visitors – The number of people you expect to visit your site, on average, each day. Depending upon how you market your site, this number could be from 1 to 1,000,000.

Average Page Views – On average, the number of web pages you expect a person to view. If you have 50 web pages in your web site, an average person may only view 5 of those pages each time they visit.

Average Page Size – The average size of your web pages, in Kilobytes (KB). If you have already designed your site, you can calculate this directly.

Average Daily File Downloads – The number of downloads you expect to occur on your site. This is a function of the numbers of visitors and how many times a visitor downloads a file, on average, each day.

Average File Size – Average file size of files that are downloadable from your site. Similar to your web pages, if you already know which files can be downloaded, you can calculate this directly.

Fudge Factor – A number greater than 1. Using 1.5 would be safe, which assumes that your estimate is off by 50%. However, if you were very unsure, you could use 2 or 3 to ensure that your bandwidth requirements are more than met.

Usually, hosting plans offer bandwidth in terms of Gigabytes (GB) per month. This is why our formula takes daily averages and multiplies them by 31.
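The formulas above translate directly into a small helper function. The Python sketch below is just that translation; the visitor counts, page sizes and file sizes in the example calls are hypothetical numbers chosen to show the units working out.

# Direct translation of the two formulas above into a small helper.
def monthly_bandwidth_gb(daily_visitors, page_views, page_size_kb,
                         daily_downloads=0, file_size_kb=0, fudge=1.5):
    """Estimated monthly bandwidth requirement, in gigabytes."""
    daily_kb = (daily_visitors * page_views * page_size_kb
                + daily_downloads * file_size_kb)
    monthly_kb = daily_kb * 31 * fudge
    return monthly_kb / (1024 * 1024)   # KB -> GB

# A small site: 200 visitors/day, 5 pages each of ~50 KB, no downloads.
print(f"{monthly_bandwidth_gb(200, 5, 50):.2f} GB/month")             # ~2.2 GB

# The same site offering a 4 MB (4096 KB) file downloaded 20 times a day.
print(f"{monthly_bandwidth_gb(200, 5, 50, 20, 4096):.2f} GB/month")   # ~5.9 GB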

Summary

Most personal or small business sites will not need more than 1GB of bandwidth per month. If you have a web site that is composed of static web pages and you expect little traffic to your site on a daily basis, go with a low bandwidth plan. If you go over the amount of bandwidth allocated in your plan, your hosting company could charge you over usage fees, so if you think the traffic to your site will be significant, you may want to go through the calculations above to estimate the amount of bandwidth required in a hosting plan.

Android 4.2: A new flavor of Jelly Bean

The latest version of Google’s mobile OS makes a number of evolutionary improvements to its already impressive repertoire, including a new quick settings menu that can be accessed from the notification pull-down and support for multiple user profiles. The multiple user support is especially handy for tablets like the new Nexus 10, which are much more likely to be shared, and now offer quick and easy user switching right from the lock screen. If you don’t want to share your tablet, just what’s on it, the new support for Miracast will allow you to wirelessly beam movies, games or anything else to a compatible display. The 10-inch tablet UI has also received a slight tweak, moving closer to the design for phones and the Nexus 7, with centered navigation buttons and the notification area up top. It might seem strange for users accustomed to the Honeycomb-style tablet layout, but the new design is much simpler and provides a consistent experience across devices.

Google has also overhauled the photo experience and added Photo Sphere — a 360-degree panoramic shooting mode that captures everything around you. Obviously, you’ll be able to post those shots to Google+, but you’ll also be able to add them to Google Maps, basically creating your own personal Street View. Interestingly, Google has also taken a page from Swype’s playbook, adding “Gesture Typing” to its keyboard. There’s also a new screensaver called Daydream that offers up news, photos and other content when a device is docked or idle.

Perhaps the biggest, and creepiest, improvements are to Google Now, which can monitor your Gmail for relevant content such as flight numbers. Hotel and restaurant reservations are now presented as cards, as are packages en route to your humble abode. The service will even remind you of events you’ve purchased tickets for, essentially making Calendar redundant for a lot of your personal life.

Fast and smooth

We put Android under a microscope, making everything feel fast, fluid, and smooth. With buttery graphics and silky transitions, moving between home screens and switching between apps is effortless, like turning pages in a book.

More reactive and uniform touch responses mean you can almost feel the pixels beneath as your finger moves across the screen. Jelly Bean makes your Android device even more responsive by boosting your device’s CPU instantly when you touch the screen, and turns it down when you don’t need it to improve battery life.

Beam photos and videos

With Android Beam on Jelly Bean you can now easily share your photos and videos with just a simple tap, in addition to sharing contacts, web pages, YouTube videos, directions, and apps. Just touch two NFC-enabled Android devices back-to-back, then tap to beam whatever’s on the screen to your friend.

A smarter keyboard, now with Gesture Typing

Writing messages on the go is easier than ever with Gesture Typing – just glide your finger over the letters you want to type, and lift after each word. You don’t have to worry about spaces because they’re added automatically for you.

The keyboard can anticipate and predict the next word, so you can finish entire sentences just by selecting suggested words. Power through your messages like never before.

Android’s dictionaries are now more accurate and relevant. With improved speech-to-text capabilities, voice typing on Android is even better. It works even when you don’t have a data connection, so you can type with your voice everywhere you go.

Bottom Line

Would you buy a new phone just because of Jelly Bean 4.2? No.

Are there any cool updates? Yes.

Call of Duty: Black Ops II

This appears to be the defining question informing the direction of developer Treyarch’s latest, Call of Duty: Black Ops II. While large portions of the design conform to the tenets established by prior iterations of the franchise, the unparalleled wealth of gameplay options and brilliant twists on the formula have shaped Black Ops II into the most ambitious and exciting Call of Duty ever made. It occasionally feels like the team might have strayed into territory they’re not quite masters of, but significant tweaks to the multiplayer loadout system, as well as the realization of player agency in the campaign, make this far more than “just another Call of Duty.” This is an evolution.

The campaign narrative jumps between various characters’ perspectives and also in time. The Cold War-era missions follow characters such as Alex Mason and Sgt. Frank Woods from the first Black Ops, while the 2025 missions follow Alex’s son, David. All of these soldiers’ fates are intertwined with the villain, Raul Menendez, and his organization Cordis Die. Menendez is the sort of villain you just can’t seem to kill and, consequently, who knows how to hold a grudge. Thing is, he’s not your typical, “I’m evil cause I do bad things,” bad guy. Menendez is a tragic character, a product of imperialist nations’ meddling during the Cold War and a survivor of some truly traumatic experiences.

The story successfully casts Menendez in a light where I’m still not sure how I feel about him. At times I wanted him dead, while at others I felt like he had a right to want revenge. Hell, I even went back and forth on whether I agreed with his end goals. Like the film Inglourious Basterds, Black Ops II becomes less about you and the “good” guys, and more about the motivations and perspective of the villain. The very fact that I’m still thinking about how the story played out, something unprecedented in a Call of Duty campaign, is a testament to the strength of the writing.

A great narrative already makes Black Ops II stand out in the pantheon of Call of Duty campaigns, but where it really sets itself apart is the addition of player choice and consequence. Moments and devices that would otherwise seem irrelevant, like whether you find all of the intel in a level or choose to shoot someone, can come back to haunt you, hurt you or help you. Failing objectives might result in new or more challenging missions rather than a restart screen. It’s a brilliant riff on the traditional Call of Duty campaign design, and, combined with the additional cutscenes that flesh out the story, creates a narrative worth replaying just to see the wildly different moments and endings. Most importantly, choice makes you a part of what you play; it’s not just a story, it’s your story. I may not have found the ending of my first playthrough satisfying because terrible things happened, but I appreciated that it was a direct byproduct of my actions.

You can also see some variance in the available strike missions, which are a new type of campaign level. These stages put you in a squad of soldiers and drones, and then let you choose which asset to control at any given time. Defending installations against enemy assault, escorting a convoy, and rescuing a hostage are some of the endeavors you might undertake. Though you have a team at your command, strike missions are still all about you gunning down foes. Your AI allies are only good at slightly hindering your enemies, so you end up doing the heavy lifting yourself, often while tracking activity on multiple fronts and hopping around to deal with advancing enemies. Having to consider the bigger picture is a nice change of pace for a series that has mostly involved just shooting what’s in front of you, and these missions are a welcome shot in the arm for the familiar campaign pacing.

Of course, familiar as it may be, that pacing is still great. The campaign ebbs and flows as you move through a variety of diverse, detailed environments using an array of powerful weaponry to dispatch your foes, occasionally hopping into a jet or onto a horse for a short jaunt, or manning a missile turret to tame a swarm of hostile drones. A few neat gadgets and surprising gameplay moments satisfy the novelty quotient, but you still get the lingering feeling that you’ve done this all before. The new strike missions, dramatic decision points, and memorable villain help keep this concern at bay, however, and this feisty, enjoyable romp is more enticing to replay than other recent Call of Duty campaigns.

Black Ops II’s competitive multiplayer has seen some changes as well, notably in the way you equip yourself before going into battle. The COD points system from Black Ops has been ditched in favor of a new token system that still affords you some control over the order in which you unlock new weapons and gear. The more interesting change is the new loadout system, which gives you ten points to play with and assigns a single point to every element of your loadout (guns, attachments, perks, lethal and tactical items). It offers a bit of flexibility if, say, you don’t use a sidearm much but could really use an extra perk, and the new wild cards allow some limited creativity. Put one of these in your loadout, and you can go into battle with two well-equipped primary weapons, or you can load up on perks and bring just a knife and your wits.

The Good

  • Great campaign scripting
  • Story choices are often tough and encourage replay
  • League play offers a new stage for the familiar multiplayer combat.

The Bad

  • Zombies mode is stagnant
  • New codcasting tool is hamstrung.

THE VERDICT

The team at Treyarch could have played it safe and Black Ops II would have sold well, but instead they challenged assumptions and pushed the series forward in awesome new directions. It’ll be hard to return to a campaign where I don’t have the ability to shape it, and I simply can’t imagine going back to the old loadout system now that Pick 10 exists. Combined with the host of subtle and overt improvements to the array of other systems, the additions to make it more appealing to Esports, and the more fleshed out Zombies mode, this is not just a fantastic Call of Duty game, but one of the best shooters of the last decade.

Mobile phone tower radiation: a cause for concern?

For the past couple of months, apart from the scams, what’s also making news is the issue of mobile phone tower radiation. Recently, new radiation norms were adopted in India, and the Department of Telecommunication (DoT) set September 1 as the deadline for telecom operators to adhere to them. As per the new norms, operators were mandated to reduce radiation levels to 1/10th of the earlier levels, thus making it 0.9 watt/m2. Furthermore, it was announced that operators found flouting these rules would be heavily penalised.

While many welcomed this news, critics were quick to point out that even this was not safe. There has been an ongoing debate about whether the radiation emitted from mobile phone towers can be a cause of cancer. The answer to this question is a tricky one, as the scientific data available to date doesn’t clearly state whether or not radiation emitted from mobile phone towers can cause cancer. Even the WHO classification terms it only a possible factor. Government officials as well as the operators are using the lack of conclusive scientific evidence as a defensive shield to fend off critics.

Even in the absence of scientific data to determine their role, there are many who are convinced that these towers are indeed death traps. Their belief is backed by instances witnessed in the country, be it the Kaiswal family from Jaipur, where three family members were diagnosed with cancer after a mobile phone tower was installed five metres from their house, or the Usha Kiran building in Mumbai, which reported three cases of brain tumour attributed to the mobile phone towers installed on the rooftop of an adjacent building. While some may shrug these off as mere coincidences, several housing societies have now come forward to protest against these towers.

According to an estimate, there are currently around five lakh mobile phone towers in India. And today, thanks to the ever-increasing popularity of mobile phones, it’s imperative for the operators to install towers to provide coverage, which will further increase their number in the future. With the lack of conclusive evidence about their safety or their role in causing cancer, the common man is at a crossroads, especially those living around these towers.

We spoke to industry authorities, medical experts and researchers to find answers to the question that’s on everyone’s mind – are these really towers of death?

The matter of radiation

India has adopted the International Commission on Non-Ionizing Radiation Protection (ICNIRP) norms for the telecom sector, which are considered to be the best in the world. Recently, the radiation levels were further reduced to 1/10th of the earlier levels as a precautionary measure. Speaking on the matter, Rajan S Mathews, Director General, Cellular Operators Association of India (COAI) said, “The Inter Ministerial Committee, as a precautionary measure, recommended that the standards be further lowered to 1/10th of the present ICNIRP standards. Despite there being no scientific evidence stating any increased health benefit from the proposed directive, the industry has gone the extra mile to ensure compliance with the same. The base station/mobile tower is essentially responsible for the signals, coverage and the quality of service in the location it is installed. As per the mandate for coverage in the Unified Access Service License (UASL) given by DoT, clause 34.2 states: ‘Coverage of a DHQ/town would mean that at least 90% of the area bounded by the Municipal limits should get the required street as well as in-building coverage.’ Therefore, while making any changes in the RF planning, operators have to ensure that they are in compliance with the Quality of Service requirements as mandated by license and regulatory conditions. The operators have worked under very tight deadlines to re-align their networks to provide the desired QoS and coverage while bringing down the emission levels, and 95% of all towers owned by our members are fully compliant with the new norms. The industry is putting in serious efforts to achieve the same even in the case of the remaining five percent of towers after resolving minor operational issues.”

While the telecom operators seem happy to have done their bit by agreeing to abide by the latest norms, many believe that the radiation norms need to be reduced further. Most vocal amongst them is Prof. Girish Kumar of the Electrical Engineering Department, IIT Bombay. He has conducted extensive research in the field and has presented his findings to DoT, but his suggestions have so far been ignored. He opines that the current radiation levels, even after being reduced, are high and can cause health troubles in the long run. Having himself experienced the ill effects of radiation owing to the nature of his work, he warns others about them, and he is quite critical when voicing his opinion. He says, “I have met with industry bodies and even government officials with my research. Earlier I used to think that these people were not knowledgeable, so I thought let me make them aware of the health problems, but now I know better. They are akin to the cigarette industry and are waiting for millions of people to die. They will keep denying that there are any health problems. Now they have stopped saying that there is no evidence; what they are saying instead is that there is no concrete evidence.”

He goes on to elaborate, “What is happening right now is that they are transmitting a huge amount of power from one rooftop, as each carrier frequency can transmit up to 20 watts of power and there may be 3-4 operators on one rooftop. This means that the total transmitted power may be 200 to 400 W. And why are they transmitting more power? The answer is simple: because they can cover several kilometres. Now what’s happening is that people who are living within a few hundred metres get very high radiation. Ideally, they should reduce the transmitted power so that from one place it isn’t more than 1-2 watts, but if they do so their range will reduce and they will have to put up either more towers or repeaters or boosters.” When we asked COAI whether the current reduction in radiation levels impacted network coverage, we were informed that it would be some time before the impact on coverage and signal strength could be ascertained, but that the operators would ensure, to the maximum possible extent, that there is no dearth in the Quality of Service offered to consumers.
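The dispute over transmitted power ultimately rests on a simple relationship: in free space, power density falls off with the square of the distance from the antenna, S = P × G / (4πd²). The short Python sketch below illustrates this falloff with assumed values: 200 W of combined rooftop power, roughly 10 dBi of antenna gain, and 0.45 W/m² as the figure corresponding to one-tenth of the ICNIRP reference level at 900 MHz. It is a free-space idealisation that ignores antenna downtilt, beam patterns and building attenuation, not a substitute for an on-site audit.

# Illustrative sketch (not a compliance tool): free-space power density at a
# given distance from a rooftop antenna, using S = P * G / (4 * pi * d^2).
# All numbers below are assumptions made for the sake of the example.
import math

def power_density(p_watts, gain_linear, distance_m):
    """Free-space power density (W/m^2) at distance_m from the antenna."""
    return (p_watts * gain_linear) / (4 * math.pi * distance_m ** 2)

total_power = 200.0   # assumed combined transmit power from one rooftop (W)
antenna_gain = 10.0   # assumed linear gain (~10 dBi) in the main beam
assumed_limit = 0.45  # approx. 1/10th of the ICNIRP reference level at 900 MHz (W/m^2)

for d in (10, 50, 100, 300):
    s = power_density(total_power, antenna_gain, d)
    flag = "above" if s > assumed_limit else "below"
    print(f"{d:>4} m: {s:.3f} W/m^2 ({flag} the assumed 0.45 W/m^2 limit)")

Run with these assumptions, the sketch shows densities well above the assumed limit within about ten metres of the main beam and a rapid drop-off beyond that, which is why both the height of the antenna and its distance from occupied flats matter so much in the arguments quoted above.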
A few years ago, Prof. Kumar developed an instrument to study radiation levels, using which he has surveyed several areas and found the levels to be high. He also developed a radiation shield, first for himself, and then established a company to sell it commercially. His harsh criticism of the telcos and the government is often countered with accusations of wanting to promote his own commercial interests. Clearing the air, he says, “Being an entrepreneur myself, I understand that no one will want to run their business at a loss. If telcos were to reduce the radiation levels even further, they would have to invest in more towers to strengthen network coverage. So I even provided them with a solution: by increasing the call rate per minute by just, say, 5 paise, they could be profitable in a couple of years.” He goes on to add that if the telcos reduced radiation to safe levels, it is his company that would be at a loss, as people wouldn’t need shielding solutions.

But with cut-throat competition, it’s highly unlikely that the telcos would want to risk increasing prices. Anuj Jain, a telecom engineer who agrees with Prof. Kumar on most points, especially on the need to further reduce radiation levels in the country, also believes that the telcos would not want to increase prices. Anuj is a resident of South Mumbai and his house faces one of the mobile phone towers. He became concerned about this when his wife was expecting because, as a telecom engineer, he was only too aware of the effects of radiation, especially on pregnant women and young children. He says, “We have a cell phone tower that faces our bedroom and the antennas are at the same level as our flat. I was concerned about the effects of radiation on my wife and our baby, and that’s what prompted me to start looking for solutions. I knew there needed to be a policy change, but my concern was: in the meantime, what should a common man do?” He found the answer in the form of radiation curtains, which contain precious metals and help absorb radiation.
Anuj also conducts radiation audits and spreads awareness about the effects of radiation. He says, “Having conducted radiation audits in and around the area I live in, I can say that the situation is quite grave. Today the towers are everywhere. I have been constantly working with people to create awareness. I am a committee member in my own building, and we ourselves have a tower on our building. But ours is the tallest building in the vicinity, and having a tower on our own building lessens the risk compared to having the tower on the building adjacent to us. So if your building has a lot of height and no building around it can get affected, then that is the ideal spot.” Housing societies earn money by allowing the installation of mobile phone towers on their roofs, but dissent is growing, with many housing societies now protesting against the installation of these towers.
While the permitted radiation levels have now been dropped to 1/10th of the earlier limits, and the majority of operators claim to have complied with the norms, it’s very difficult to ascertain the truth. To address people’s growing concerns, DoT recently launched a public helpline and web portal for the Mumbai Telecom Circle, where complaints against radiation emitted from mobile towers can be registered. It can be accessed from the DoT website, under the link “Public Grievance – EMF Radiation”. The Telecom Engineering Centre (TEC), the technical wing of the Ministry of Communications, has a test procedure for measuring exposure levels, and the Telecom Enforcement Resource & Monitoring (TERM) cell then conducts an audit of the site. You will have to pay Rs. 4,000, and if the site is found to be non-compliant with the norms, the amount will be refunded.
Do mobile phone towers cause cancer?
Mobile phones work using electromagnetic radiation. The electromagnetic spectrum is large, with frequency and energy varying across it (for instance, the radio you listen to is also a form of electromagnetic radiation, just at a different frequency). The highest end of the spectrum is called ionising radiation and is used in therapeutic radiation to treat cancer, while the lower end of the spectrum is known as radio frequency (RF) waves. Just above the radio frequency waves are microwaves. Mobile phone technology uses this microwave end of the spectrum, which begins at roughly 300 MHz and falls in the non-ionising category. Similarly, the radiation emitted by mobile phone towers lies in the non-ionising part of the electromagnetic spectrum.
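A rough back-of-the-envelope check helps explain the ionising/non-ionising distinction: the energy carried by a single photon, E = h × f, at mobile frequencies is millions of times smaller than the energy needed to knock an electron out of a typical molecule. The short Python sketch below illustrates this; the 900 MHz frequency and the ~12 eV ionisation threshold are assumed, illustrative values.

# Illustrative sketch: photon energy E = h * f, to show why mobile-band
# radiation falls in the non-ionising part of the spectrum. The 900 MHz
# frequency and the ~12 eV threshold are assumptions for the example.
PLANCK_H = 6.626e-34          # Planck's constant (J*s)
EV_PER_JOULE = 1 / 1.602e-19  # conversion factor from joules to electronvolts

def photon_energy_ev(freq_hz):
    """Energy of a single photon at freq_hz, in electronvolts."""
    return PLANCK_H * freq_hz * EV_PER_JOULE

mobile_band = 900e6           # a typical GSM-band frequency (Hz)
ionisation_threshold = 12.0   # rough energy (eV) needed to ionise typical molecules

e = photon_energy_ev(mobile_band)
print(f"Photon energy at 900 MHz: {e:.2e} eV")
print(f"Ratio to the assumed ionisation threshold: {e / ionisation_threshold:.2e}")

With these assumed numbers, a 900 MHz photon carries only a few millionths of an electronvolt, so no single photon can break chemical bonds; the open question in the debate above is about prolonged exposure and heating effects, not ionisation.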

Anonymous hacked ImageShack and Symantec

Earlier, on 5th November, the well-known hacker group Hack The Planet (HTP) released a zine containing breached information on two well-known services – the image hosting site ImageShack and the antivirus giant Symantec – along with a zero-day exploit for the ZPanel hosting control panel system.

Even though the leak is clearly marked as the work of HTP, other media outlets have been reporting these two attacks as part of the Anonymous #Nov5 attacks, which started today.


The leaked data was uploaded to various places and contains a heap of information from the ImageShack server, as well as all the exploits and vulnerabilities the group had found, along with a stated reason for the attack: “Well, we like a challenge, so we decided to find out what changes were made.”

NBC’s site wasn’t the only one to have been hacked because of Guy Fawkes Day. Over the past day, a number of apparently Anonymous-affiliated hackers have gone after LG, ImageShack, Symantec, and other sites, either defacing them or publishing what they claim is private data. In the former category, Argentina’s Caja Popular bank temporarily bore an AntiSec banner and a manifesto supporting Jeremy Hammond, who was arrested as part of a sweep against LulzSec in March. The site now appears to be down. In the latter group, the evidence of hacking is less clear but the implications potentially worse.

“IMAGESHACK HAS BEEN COMPLETELY OWNED, FROM THE GROUND UP.”

One Pastebin document contains what its authors say is data from ImageShack and Symantec servers; they claim that “ImageShack has been completely owned, from the ground up. We have had root and physical control of every server and router they own. For years.” In another incident, “UR0B0R0X” posted account email addresses and password hashes allegedly from LG Smart World, and someone else uploaded the alleged details for 28,000 PayPal accounts.

With the exception of the highly visible Caja Popular hack, these security breaches haven’t been confirmed. PayPal’s Anuj Nayar has said on Twitter that the company has “been unable to find any evidence that validates” the claim. Symantec told us that it is investigating the alleged hack but that “we have found no evidence that customer information was exposed or impacted,” which doesn’t rule out some kind of compromise. Meanwhile, more incidents are still being reported as these loose combinations of prank and civil liberties protest continue through the 5th of November.

Update: The PayPal hack, at least, appears to have actually affected another service instead — it was reportedly targeting the ZPanel control panel.

Halo 4

Master Chief returns in Halo 4, part of a new trilogy in the colossal Halo universe.

Set almost five years after the events of Halo 3, Halo 4 takes the series in a new direction and sets the stage for an epic new sci-fi saga, in which the Master Chief returns to confront his destiny and face an ancient evil that threatens the fate of the entire universe. Halo 4 also introduces a new multiplayer offering, called Halo Infinity Multiplayer, that builds off of the Halo franchise’s rich multiplayer history. The hub of the Halo 4 multiplayer experience is the UNSC Infinity – the largest starship in the UNSC fleet that serves as the center of your Spartan career. Here you’ll build your custom Spartan-IV supersoldier, and progress your multiplayer career across all Halo 4 competitive and cooperative game modes.

No console shooter has a richer, deeper, more revered multiplayer history than Halo. So how does Halo 4’s multiplayer suite live up to the legacy in 343’s hands?

It’s golden.

Halo has evolved, wrapping its multiplayer in an unexpected narrative context – the Spartan-on-Spartan battles are presented as training sessions aboard the UNSC Infinity ship – complete with more of the same visually arresting introductory cutscenes for both the adversarial War Games and the new Spartan Ops co-op mode.

With Halo 4’s immaculate weapon balancing and gun-for-every-situation combat strategies, it needs only a great crop of multiplayer maps in order to qualify for classic status. Fear not, as 343 packs War Games with 10 mostly stellar stages and three additional Forge-built battlegrounds. Exile leads the vehicle-heavy Big-Team Battle complement, Ragnarok shines as a Mantis-showcasing remake of Halo 3’s Valhalla, and Haven is among the series’ all-time finest small and symmetrical levels. Oh, and one of the official Forge constructions, Settler, is a smaller, crazier evolution of the franchise’s most famous map that I absolutely love: Blood Gulch. Halo 4 might not have its instant-classic (a la Halo 2’s Lockout), but this is an impressive collection of outstanding battlegrounds, with a seemingly greater emphasis placed on the large-scale, vehicle-inclusive levels that are Halo’s bread-and-butter.

Of course, gorgeous graphics are only one responsibility a console’s killer app must bear. Perhaps equal to Halo 4’s monitor-melting visuals is its bar-none, best-in-class sound design. If you think you’ve heard Halo, check your ears and listen again. Nary a gunshot, MJOLNIR boot clank, or Covenant Elite’s “Wort wort wort” passes through your speakers without a significant, authoritative overhaul that lends an aggressive, testosterone-inducing punch to Halo 4’s combat.

Few game series are known as much for their music as Halo, and thus much has been made of British electronica producer Neil Davidge taking over for the beloved Bungie incumbent, Marty O’Donnell. It’s a bold shift – and probably wise of 343 to go in a tonally different direction rather than attempt to emulate O’Donnell – but the results are mixed. The trademark monk chants are gone, and Davidge’s moody tunes are complementary rather than additive. The new tracks simply aren’t memorable and never elevate the action happening on the screen the way that O’Donnell’s bombastic scores did, though this may be intentional, as Davidge’s compositions are decidedly atmospheric.

THE VERDICT

Cortana once asked Master Chief what would happen if he missed his target, and in the single greatest line of dialogue in Halo history, Chief replied with the coolest, calmest confidence, “I won’t.”

Release Date: November 6, 2012
MSRP: 59.99 USD
M for Mature: Blood, Violence
Genre: First-Person Shooter
Publisher: Microsoft Game Studios
Developer: 343 Industries

Max Payne 3

Max Payne has suffered beyond reasonable limits. (It’s all in the name.) Nine years have passed since the last game in the series, yet little has changed for its long-suffering protagonist, who remains deeply traumatised by the death of his wife and child. ‘Trauma’ is the key word – in Greek, it means ‘wound’, and Max is someone who has never let his wound fully heal. To move on would be to forget – a betrayal of those he loved – and so instead he chooses to wallow in the past and the pain, with the help of brown liquor and white pills.

But thankfully, Max Payne 3 isn’t content to simply relive the past, and makes bold stylistic and narrative decisions to avoid stagnation. And though these choices have significant consequences for the game’s pacing that may prove divisive, Max Payne 3 is overall a brilliant, darkly engrossing third outing for one of video games’ most troubled characters.

Wherever you go, there you are. It’s a truth Max Payne knows better than anyone. Fleeing his New York life to take a job working security for a wealthy family in Sao Paulo, the hard-drinkin’, pill-poppin’ Max finds that his demons come along for the ride. Though the details of the plot add up to your typical story of conspiracy and corruption, of the rich and powerful preying on the poor and helpless to become even more rich and powerful, the writing, acting, and presentation elevate this tale well above a boilerplate video game crime story.

It’s hard to stay ambivalent once you see the horrors being suffered by the innocent here, and you’ll likely want to see Max’s quest for vengeance through to its conclusion just as badly as he does. Max reveals a complexity here not seen in earlier games, as he hits rock bottom and must either stay there or face his demons head-on and make himself anew. Other characters, too, reveal a surprising humanity. You might be tempted to write off Marcelo, the youngest brother in the wealthy Branco dynasty Max is hired to protect, as the shallow playboy he often appears to be. But in moments of disarming honesty, he reveals to Max a depth that lies beneath the facade he presents to the world.

Cutscenes use multiple moving panels to pay homage to the graphic-novel-style storytelling of previous games without feeling beholden to it, and the considered use of blurring and other visual effects echoes Max’s state of mind, perhaps making you feel as if you’re the one who has been hitting the bottle a little too hard. James McCaffrey does an excellent job reprising his role as Max, bringing a wider range of emotions to a character who has previously often been one-note. The writing is terrific; Max’s world-weary wit is as bone-dry as ever, and as he ruminates on things like loyalty and loss, much of what he says has the sound of hard-earned wisdom. Subtle touches throughout the game make Max seem convincingly alive, such as the complex look that crosses his face at the start of one stage when bloodshed seems inevitable; it’s as if he dreads what’s coming, but does his best to mentally prepare himself for it.

Verdict

So, should you play this game? Hell yeah! It doesn’t matter if you haven’t followed the Max Payne franchise; there isn’t much in common with the earlier games except for the one chapter that takes us back to Max’s past in NYC. Gameplay, bullet-time, graphics and sound are brilliant. The story, though not as layered as in the first two instalments, still offers enough plot twists to keep you engaged. Level design across varied terrains is a definite plus. If you are a third-person shooter fanatic, the decision to buy this game is a no-brainer.

Published by: Rockstar Games

Developed by: Rockstar Studios
Genre: Third-Person Shooter
Release Date:
United States: May 15, 2012
UK: May 18, 2012
Australia: May 18, 2012
MSRP: 59.99 USD
M for Mature: Blood and Gore, Intense Violence, Partial Nudity, Strong Language, Strong Sexual Content, Use of Drugs and Alcohol
Also Available On: PC, PS3
Also known as: Max Payne 3