Tuesday, 12 July 2011

Wireless Power Transmission - The Dawn of a New Era

Unless you are particularly organized and good with tie wrap, you probably have a few dusty power cord tangles around your home. You may have even had to follow one particular cord through the seemingly impossible snarl to the outlet, hoping that the plug you pull will be the right one. This is one of the downfalls of electricity. While it can make people's lives easier, it can add a lot of clutter in the process.
For these reasons, scientists have tried to develop methods of wireless power transmission that could cut the clutter or lead to clean sources of electricity. While the idea may sound futuristic, it isn't particularly new. Nikola Tesla proposed theories of wireless power transmission in the late 1800s and early 1900s. One of his more spectacular displays involved remotely powering lights in the ground at his Colorado Springs experiment station.
The wireless transmission of energy is common in much of the world. Radio waves are energy, and people use them to send and receive cell phone, TV, radio and WiFi signals every day. The radio waves spread in all directions until they reach antennae that are tuned to the right frequency. A similar method for transferring electrical power would be both inefficient and dangerous.
The answer lies in inductive coupling, which uses the magnetic fields that are a natural part of a current's movement through wire. Any time electrical current moves through a wire, it creates a circular magnetic field around the wire. Bending the wire into a coil amplifies the magnetic field, and the more loops the coil makes, the bigger the field will be.
If you place a second coil of wire in the magnetic field you've created, the field can induce a current in the wire. This is essentially how a transformer works, and it's how an electric toothbrush recharges. It takes three basic steps:
1. Current from the wall outlet flows through a coil inside the charger, creating a magnetic field. In a transformer, this coil is called the primary winding.
2. When you place the device on the charger, the magnetic field induces a current in a second coil, or secondary winding, which connects to the device's circuit.
3. This current charges the device's battery.
Household devices produce relatively small magnetic fields. For this reason, chargers hold devices at the distance necessary to induce a current, which can only happen if the coils are close together. A larger, stronger field could induce current from farther away, but the process would be extremely inefficient. Since a magnetic field spreads in all directions, making a larger one would waste a lot of energy.
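The transformer steps above can be sketched numerically. The Python back-of-the-envelope calculation below is not from the article; the turn counts, coil dimensions and mains figures are all illustrative assumptions, and the mutual-inductance formula is the idealized one for two tightly coupled coaxial solenoids.

```python
import math

# Illustrative values (assumed, not from the article): a small charger coil pair.
mu0 = 4 * math.pi * 1e-7   # permeability of free space (H/m)
turns_primary = 200        # loops in the primary winding
turns_secondary = 100      # loops in the secondary winding
radius = 0.01              # coil radius in metres (1 cm)
length = 0.02              # solenoid length in metres

# Mutual inductance of two tightly coupled coaxial solenoids (idealized case).
area = math.pi * radius ** 2
M = mu0 * turns_primary * turns_secondary * area / length

# A 50 Hz mains current of 1 A peak changes at up to 2*pi*50 amps per second.
di_dt = 2 * math.pi * 50 * 1.0
emf = M * di_dt  # magnitude of the peak voltage induced in the secondary

print(f"Mutual inductance: {M*1e6:.1f} uH")
print(f"Peak induced EMF:  {emf*1000:.2f} mV")
```

The key point the numbers illustrate: the induced voltage depends on how fast the primary current changes and how strongly the coils are coupled, which is why the coils must sit close together.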
In November 2006, however, researchers at MIT reported that they had discovered an efficient way to transfer power between coils separated by a few meters. The team, led by Marin Soljacic, theorized that they could extend the distance between the coils by adding resonance to the equation.

Research at MIT indicates that induction can take place a little differently if the electromagnetic fields around the coils resonate at the same frequency. The theory uses a curved coil of wire as an inductor. A capacitance plate, which can hold a charge, attaches to each end of the coil. As electricity travels through this coil, the coil begins to resonate. Its resonant frequency is a product of the inductance of the coil and the capacitance of the plates.
If both coils are out of range of one another, nothing will happen, since the fields around the coils aren't strong enough to affect much around them. Similarly, if the two coils resonate at different frequencies, nothing will happen. But if two resonating coils with the same frequency get within a few meters of each other, streams of energy move from the transmitting coil to the receiving coil. According to the theory, one coil can even send electricity to several receiving coils, as long as they all resonate at the same frequency. The researchers have named this non-radiative energy transfer since it involves stationary fields around the coils rather than fields that spread in all directions.
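The resonant frequency described above follows directly from the coil's inductance and the plates' capacitance: f = 1/(2π√(LC)). A minimal Python sketch, with component values that are purely illustrative assumptions:

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency (Hz) of an LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Assumed example values: a 25 uH coil with 100 pF of plate capacitance.
L = 25e-6    # inductance in henries
C = 100e-12  # capacitance in farads

f = resonant_frequency(L, C)
print(f"Resonant frequency: {f/1e6:.2f} MHz")
# Energy only transfers efficiently if BOTH coils share this frequency.
```

Tuning either coil's inductance or capacitance detunes the pair, which is exactly why two coils resonating at different frequencies transfer nothing.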




Scientists all over the world have built various prototypes, but none has reached mass production because of the initial setup cost and the hazards involved. Intel, however, has pressed ahead and built a working prototype.
Intel's Wireless Power Transmission Model



Wireless power techniques mainly fall into two categories, non-radiative and radiative. In near-field or non-radiative techniques, power is transferred over short distances by magnetic fields using inductive coupling between coils of wire, or by electric fields using capacitive coupling between metal electrodes.[2][3][4][5] Inductive coupling is the most widely used wireless power technology; its applications include charging handheld devices like phones and electric toothbrushes, RFID tags, continuous wireless power transfer in implantable medical devices such as artificial cardiac pacemakers, and charging electric vehicles.


Saturday, 9 July 2011

Memristor - The Fourth Fundamental Circuit Element

The Next Big thing? The memristor, a microscopic component that can "remember" electrical states even when turned off. It's expected to be far cheaper and faster than flash storage. A theoretical concept since 1971, it has now been built in labs and is already starting to revolutionize everything we know about computing, possibly making flash memory, RAM, and even hard drives obsolete within a decade.
Since the dawn of electronics, we've had only three types of circuit components--resistors, inductors, and capacitors. But in 1971, UC Berkeley researcher Leon Chua theorized the possibility of a fourth type of component, one that would be able to measure the flow of electric current: the memristor. Thirty-seven years later, in 2008, Hewlett-Packard built one.

 As its name implies, the memristor can "remember" how much current has passed through it. And by alternating the amount of current that passes through it, a memristor can also become a one-element circuit component with unique properties. Most notably, it can save its electronic state even when the current is turned off, making it a great candidate to replace today's flash memory.
Memristors will theoretically be cheaper and far faster than flash memory, and allow far greater memory densities. They could also replace RAM chips as we know them, so that, after you turn off your computer, it will remember exactly what it was doing when you turn it back on, and return to work instantly. This lowering of cost and consolidating of components may lead to affordable, solid-state computers that fit in your pocket and run many times faster than today's PCs.
The memristor could spawn a whole new type of computer, thanks to its ability to remember a range of electrical states rather than the simplistic "on" and "off" states that today's digital processors recognize. By working with a dynamic range of data states in an analog mode, memristor-based computers could be capable of far more complex tasks than just shuffling ones and zeroes around.
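A toy model can make the "remembering" concrete. The Python sketch below is loosely inspired by the linear dopant-drift idea behind HP's device, but every parameter value is made up for illustration; it is not HP's actual model.

```python
# Toy model of a charge-controlled memristor. Resistance depends only on the
# total charge that has ever flowed through the device, so the state survives
# power-off. All parameter values below are illustrative assumptions.
R_ON, R_OFF = 100.0, 16000.0   # fully-doped / undoped resistance (ohms)
Q_D = 1e-4                     # charge needed to sweep the full range (coulombs)

def memristance(q):
    """Resistance as a function of total charge that has passed through."""
    x = min(max(q / Q_D, 0.0), 1.0)   # doped fraction, clamped to [0, 1]
    return R_OFF - (R_OFF - R_ON) * x

# Push current through and watch the resistance "remember" the history.
q = 0.0
for step in range(5):
    q += 2e-5                          # 20 uC of charge per step
    print(f"charge={q*1e6:5.0f} uC  resistance={memristance(q):8.1f} ohm")

# Switch the current off: q is unchanged, so the resistance state persists.
```

Reading the state back is just measuring the resistance, which is why the same element can act as both memory cell and circuit component.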
Microscopic image of Memristor

An atomic force microscope image of a simple circuit with 17 memristors lined up in a row.  Each memristor has a bottom wire that contacts one side of the device and a top wire that contacts the opposite side.  The devices act as 'memory resistors', with the resistance of each device depending on the amount of charge that has moved through each one. The wires in this image are 50 nm wide, or about 150 atoms in total width.

 Advantages

1. Could replace existing Random Access Memory (RAM) and Dynamic Random Access Memory (DRAM).
2. Denser cells allow memristor circuits to store more data than flash memory.
3. A memristor circuit requires lower voltage, less power and less time to turn on than other memories.
4. It does not require power to maintain its memory.
5. Provides the ability to store a vast array of states rather than only 1 and 0, which could lead to a different class of computing capabilities.

Saturday, 23 April 2011

The BitTorrent Protocol

BitTorrent is a protocol that enables fast downloading of large files using minimum Internet bandwidth. It costs nothing to use and includes no spyware or pop-up advertising.



Unlike other download methods, BitTorrent maximizes transfer speed by gathering pieces of the file you want and downloading these pieces simultaneously from people who already have them. This process makes popular and very large files, such as videos and television programs, download much faster than is possible with other protocols.

In this article, we'll examine how BitTorrent works and how it is different from other file-distribution methods. In addition, you'll learn how to use BitTorrent and what the future might hold for this innovative approach to serving files over the Internet.

Traditional Client-Server Downloading

To understand how BitTorrent works and why it is different from other file-serving methods, let's examine what happens when you download a file from a Web site. It works something like this:

1. You open a Web page and click a link to download a file to your computer.

2. The Web browser software on your computer (the client) tells the server (a central computer that holds the Web page and the file you want to download) to transfer a copy of the file to your computer.

3. The transfer is handled by a protocol (a set of rules), such as FTP (File Transfer Protocol) or HTTP (HyperText Transfer Protocol).
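Those three steps can be demonstrated end to end with Python's standard library: a tiny HTTP server plays the central server, and urllib plays the browser. The port, file name and file contents here are arbitrary choices for the demo, not anything from a real site.

```python
import functools
import http.server
import os
import tempfile
import threading
import urllib.request

# --- the "server": one central computer that holds the file ---
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "file.txt"), "w") as f:
    f.write("hello from the central server")

handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=tmpdir)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# --- the "client": your browser asking the server for a copy over HTTP ---
data = urllib.request.urlopen(f"http://127.0.0.1:{port}/file.txt").read()
print(data.decode())

server.shutdown()
```

Note the shape of the exchange: every client pulls the whole file from the one server, which is exactly the bottleneck BitTorrent was designed to avoid.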

Client-server download process

Peer-to-peer File Sharing

Peer-to-peer file sharing is different from traditional file downloading. In peer-to-peer sharing, you use a software program (rather than your Web browser) to locate computers that have the file you want. Because these are ordinary computers like yours, as opposed to servers, they are called peers. The process works like this:
1. You run peer-to-peer file-sharing software (for example, a Gnutella program) on your computer and send out a request for the file you want to download.

2. To locate the file, the software queries other computers that are connected to the Internet and running the file-sharing software.

3. When the software finds a computer that has the file you want on its hard drive, the download begins.

4. Others using the file-sharing software can obtain files they want from your computer's hard drive.

The file-transfer load is distributed between the computers exchanging files, but file searches and transfers from your computer to others can cause bottlenecks. Some people download files and immediately disconnect without allowing others to obtain files from their system, which is called leeching. This limits the number of computers the software can search for the requested file.

What BitTorrent Does

Unlike some other peer-to-peer downloading methods, BitTorrent is a protocol that offloads some of the file tracking work to a central server (called a tracker). Another difference is that it uses a principle called tit-for-tat: in order to receive files, you have to give them. This solves the problem of leeching. With BitTorrent, the more files you share with others, the faster your downloads are. Finally, to make better use of available Internet bandwidth (the pipeline for data transmission), BitTorrent downloads different pieces of the file you want simultaneously from multiple computers.

Here's how it works:

BitTorrent's peer-to-peer download process

1. You open a Web page and click on a link for the file you want.

2. BitTorrent client software communicates with a tracker to find other computers running BitTorrent that have the complete file (seed computers) and those with a portion of the file (peers that are usually in the process of downloading the file).

3. The tracker identifies the swarm, which is the connected computers that have all of or a portion of the file and are in the process of sending or receiving it.

4. The tracker helps the client software trade pieces of the file you want with other computers in the swarm. Your computer receives multiple pieces of the file simultaneously.

5. If you continue to run the BitTorrent client software after your download is complete, others can receive pieces of the file from your computer; your future download rates improve because you are ranked higher in the "tit-for-tat" system.

Downloading pieces of the file at the same time helps solve a common problem with other peer-to-peer download methods: Peers upload at a much slower rate than they download. By downloading multiple pieces at the same time, the overall speed is greatly improved. The more computers involved in the swarm, the faster the file transfer occurs because there are more sources of each piece of the file. For this reason, BitTorrent is especially useful for large, popular files.
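The simultaneous-pieces idea can be sketched with a toy swarm simulation in Python. This is a deliberate simplification: real BitTorrent adds rarest-first piece selection, piece hashing and tit-for-tat choking, none of which is modelled here.

```python
import random

# Toy swarm: the file is split into 8 pieces; each peer holds some subset.
random.seed(1)
NUM_PIECES = 8
seed_peer = set(range(NUM_PIECES))                  # a seed has every piece
peers = [seed_peer] + [set(random.sample(range(NUM_PIECES), 4)) for _ in range(3)]

have = set()      # pieces we have downloaded so far
rounds = 0
while len(have) < NUM_PIECES:
    rounds += 1
    for peer in peers:                              # ask every peer at once
        wanted = peer - have                        # pieces they have, we lack
        if wanted:
            have.add(wanted.pop())                  # grab one piece per peer per round
print(f"complete in {rounds} round(s): pieces {sorted(have)}")
```

Because every peer contributes a different piece each round, adding peers to the swarm shortens the download rather than slowing it, which is the property the paragraph above describes.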

This is how BitTorrent works; hope this helps answer your queries.

Tuesday, 8 March 2011

CLOUD COMPUTING : The Fifth Generation of Computing (after Mainframe, Personal Computer, Client-Server Computing, and the Web)




What is Cloud Computing ?

The Internet is often represented as a cloud and the term “cloud computing” arises from that analogy. Accenture defines cloud computing as the dynamic provisioning of IT capabilities (hardware, software, or services) from third parties over a network. McKinsey says that clouds are hardware-based services offering compute, network and storage capacity where: hardware management is highly abstracted from the buyer; buyers incur infrastructure costs as variable OPEX [operating expenditures]; and infrastructure capacity is highly elastic. The cloud model differs from traditional outsourcing in that customers do not hand over their own IT resources to be managed. Instead they plug into the cloud, treating it as they would an internal data centre or computer providing the same functions.


The great advantage of cloud computing is “elasticity”: the ability to add capacity or applications almost at a moment’s notice. Companies buy exactly the amount of storage, computing power, security and other IT functions that they need from specialists in data-centre computing. They get sophisticated data centre services on demand, in only the amount they need and can pay for, at service levels set with the vendor, with capabilities that can be added or subtracted at will. The metered cost, pay-as-you-go approach appeals to small- and medium-sized enterprises; little or no capital investment and maintenance cost is needed. IT is remotely managed and maintained, typically for a monthly fee, and the company can let go of “plumbing concerns”. Since the vendor has many customers, it can lower the per-unit cost to each customer. Larger companies may find it easier to manage collaborations in the cloud, rather than having to make holes in their firewalls for contract research organizations. The next question that comes to mind: just how big is cloud computing?
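A back-of-the-envelope comparison shows why the metered, pay-as-you-go model appeals to smaller firms. Every figure in this Python sketch is invented purely for illustration:

```python
# Comparing the two purchasing models: own the hardware vs. rent on demand.
# All numbers below are made-up illustrative assumptions.
server_capex = 5000.0        # buy a server outright (upfront cost, $)
server_life_months = 36      # and depreciate it over three years
cloud_rate_per_hour = 0.10   # rent equivalent capacity on demand ($/hour)

hours_needed_per_month = 200 # the workload only runs part-time

owned_monthly = server_capex / server_life_months
cloud_monthly = cloud_rate_per_hour * hours_needed_per_month

print(f"own the hardware: ${owned_monthly:7.2f}/month (paid even when idle)")
print(f"pay as you go:    ${cloud_monthly:7.2f}/month (only metered usage)")
```

The crossover depends entirely on utilization: a server that runs flat out around the clock can beat the metered rate, which is why elasticity matters most for bursty or part-time workloads.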

$42B: the estimated size of the cloud computing infrastructure market in 2012, up from $16B in 2008 (IDC, October 2008).

What a cloud consists of


TYPES OF CLOUDS

1. IaaS
Infrastructure as a Service (IaaS) offerings, also referred to as Resource Clouds, provide (managed and scalable) resources as services to the user – in other words, they basically provide enhanced virtualisation capabilities. Accordingly, different resources may be provided via a service interface: Data & Storage Clouds deal with reliable access to data of potentially dynamic size, weighing resource usage against access requirements and / or quality definitions.
Examples: Amazon S3, SQL Azure.

2. PaaS
Platform as a Service (PaaS) offerings provide computational resources via a platform upon which applications and services can be developed and hosted. PaaS typically makes use of dedicated APIs to control the behaviour of a server hosting engine which executes and replicates the execution according to user requests (e.g. access rate). As each provider exposes its own API according to its key capabilities, applications developed for one specific cloud provider cannot be moved to another cloud host – there are, however, attempts to extend generic programming models with cloud capabilities.
Examples: Force.com, Google App Engine, Windows Azure (Platform).

3. SaaS
Software as a Service (SaaS), also sometimes referred to as Service or Application Clouds, offers implementations of specific business functions and business processes that are provided with cloud capabilities – i.e. it provides applications / services using a cloud infrastructure or platform, rather than providing cloud features itself. Often, standard application software functionality is offered within a cloud.
Examples: Google Docs, Salesforce CRM, SAP Business by Design.

On the basis of the deployment modes of clouds, they can be categorized into 5 types:

1. Private Clouds are typically owned by the respective enterprise and / or leased. Functionalities are not directly exposed to the customer, though in some cases services with cloud enhanced features may be offered – this is similar to (Cloud) Software as a Service from the customer point of view.
Example: eBay.


2. Public Clouds. Enterprises may use cloud functionality from others, or offer their own services to users outside the company. Giving users the actual capability to exploit the cloud features for their own purposes also allows other enterprises to outsource their services to such cloud providers, thus reducing the cost and effort of building up their own infrastructure. As noted in the context of cloud types, the scope of functionalities may differ.
Example: Amazon, Google Apps, Windows Azure.



3. Hybrid Clouds. Though public clouds allow enterprises to outsource parts of their infrastructure to cloud providers, they would at the same time lose control over the resources and the distribution / management of code and data. In some cases this is not desired by the respective enterprise. Hybrid clouds combine private and public cloud infrastructures so as to achieve maximum cost reduction through outsourcing whilst maintaining the desired degree of control over e.g. sensitive data by employing local private clouds. There are not many hybrid clouds actually in use today, though initial initiatives such as the one by IBM and Juniper already introduce base technologies for their realization.


4. Community Clouds. Typically, cloud systems are restricted to the local infrastructure, i.e. providers of public clouds offer their own infrastructure to customers. Though a provider could actually resell the infrastructure of another provider, clouds do not aggregate infrastructures to build up larger, cross-boundary structures. Smaller enterprises in particular could profit from community clouds, to which different entities contribute their respective (smaller) infrastructures. Community clouds can either aggregate public clouds or dedicated resource infrastructures.


5. Special Purpose Clouds. IaaS clouds originating from data centres in particular have a “general purpose” appeal, as their capabilities can be used equally for a wide scope of use cases and customer types. As opposed to this, PaaS clouds tend to provide functionality specialized to specific use cases. This should not be confused with the platform being proprietary: specialization implies providing additional, use-case-specific methods, whilst proprietary implies that the structure of data and interfaces are specific to the provider.

Cloud Computing Architecture
  
When talking about a cloud computing system, it's helpful to divide it into two sections: the front end and the back end. They connect to each other through a network, usually the Internet. The front end is the side the computer user, or client, sees. The back end is the "cloud" section of the system.
The front end includes the client's computer (or computer network) and the application required to access the cloud computing system. Not all cloud computing systems have the same user interface. Services like Web-based e-mail programs leverage existing Web browsers like Internet Explorer or Firefox. Other systems have unique applications that provide network access to clients.
On the back end of the system are the various computers, servers and data storage systems that create the "cloud" of computing services. In theory, a cloud computing system could include practically any computer program you can imagine, from data processing to video games. Usually, each application will have its own dedicated server.
A central server administers the system, monitoring traffic and client demands to ensure everything runs smoothly. It follows a set of rules called protocols and uses a special kind of software called middleware. Middleware allows networked computers to communicate with each other.
If a cloud computing company has a lot of clients, there's likely to be high demand for a lot of storage space; some companies require hundreds of digital storage devices. A cloud computing system needs at least twice the number of storage devices it would otherwise require to keep all its clients' information stored. That's because these devices, like all computers, occasionally break down. A cloud computing system must make a copy of all its clients' information and store it on other devices. The copies enable the central server to access backup machines to retrieve data that otherwise would be unreachable.
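The "twice the number of devices" rule is simply a replication factor of 2: each piece of client data lives on two different devices, so losing any single device loses nothing. A minimal Python sketch of such a placement (the round-robin scheme is an illustrative choice, not how any particular cloud provider works):

```python
# Replication factor 2: every data item is stored on two distinct devices.
REPLICAS = 2

def place(data_items, num_devices):
    """Assign each item to REPLICAS distinct devices (simple round-robin)."""
    devices = {d: [] for d in range(num_devices)}
    for i, item in enumerate(data_items):
        for r in range(REPLICAS):
            devices[(i + r) % num_devices].append(item)
    return devices

layout = place(["a", "b", "c", "d"], num_devices=4)
failed = 2                          # simulate one storage device breaking down
survivors = {item for d, items in layout.items() if d != failed for item in items}
print(sorted(survivors))            # every item is still reachable
```

With replication factor 2 the system tolerates any one device failure, at the cost of doubling raw storage, which is exactly the trade-off the paragraph above describes.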
Layers in a cloud computing environment


Visual Explanation of Cloud Computing and related concepts

Monday, 7 March 2011

UEFI : THE SUPER BIOS (Faster, Safer and more Efficient)


The Unified Extensible Firmware Interface (UEFI) specification defines a new model for the interface between operating systems and platform firmware. The interface consists of data tables that contain platform-related information, plus boot and runtime service calls that are available to the operating system and its loader. Together, these provide a standard, modern environment for booting an operating system and running pre-boot applications.



The BIOS used by PC/AT-compatible systems is based on the x86, 16-bit, real-mode architecture. UEFI, however, is not specific to any processor architecture, so it can perform the same function as BIOS while evolving with new technologies. The current UEFI specification allows operating systems to boot using modern modes and calling conventions.
                                                                             
  
This is what the UEFI screen looks like(simply amazing!!)


UEFI is the successor to EFI. Intel contributed EFI 1.10 (the final version of EFI) as the starting point for the UEFI specifications. Enhancements are managed by the UEFI Forum; Intel retains copyright to the EFI 1.10 specification. UEFI works to standardize two primary functions of the PC Basic Input/Output System (BIOS): the firmware-to-OS interface and platform initialization. The UEFI Specification Working Group (USWG) creates the UEFI specification, describing a firmware-to-OS interface analogous to BIOS software interrupts and the BIOS data area (BDA). The Platform Initialization Working Group (PIWG) specifications are intended to promote interoperability between firmware components provided by different entities, such as silicon vendors and firmware vendors.



UEFI was designed to bring modularity to system firmware by implementing a driver-based approach to platform initialization. Hardware and device chipsets will have a UEFI driver that allows the system firmware to initialize them through a standard API rather than having to program them directly.   


     
UEFI Architecture


UEFI also establishes its own pre-OS environment, including APIs and services that can be used to create applications for a variety of purposes, including configuration. This is known as the UEFI Shell.
 

         
UEFI enabled Motherboard

The UEFI boot process in comparison to BIOS boot process


Both the old BIOS and its successor UEFI provide an interface between the components of the motherboard and the operating system. To speed up boot time, UEFI provides some clever functions, as illustrated.



Assassin's Creed: Brotherhood "Game Review" (Rating: 4/5)




It’s been interesting watching Ubisoft figure out the Assassin’s Creed games. The first of the series was very rough but still established an interesting sci-fi storyline and showed the promise of the stealth-driven open-world mechanics beneath it all. Things really started to click with Assassin’s Creed II, which smoothed out many of the rough edges. Assassin’s Creed: Brotherhood, while so dependent on the story of AC2 that it seems like Ubisoft made a mistake by not calling it Assassin’s Creed II: Brotherhood, is the most assured entry in the series so far, even if it’s not the most evolutionary.



If you finished Assassin’s Creed II, you no doubt have some burning questions about just what in the holy hot hell is going on in this universe, which has pretty rapidly escalated its already tripped-out, high-concept premise of ancient, clandestine groups of Templars and assassins using genetic memories to hunt down powerful artifacts of mysterious, even more ancient origin. Picking up right where ACII left off, Brotherhood sees Ezio after recovering the Pieces of Eden and learning a bit about their true nature. While he finds some time for the ladies, it’s not long before a new Templar threat, in the form of the categorically creepy Borgia family, is literally knocking on his door, stealing the Apple of Eden, destroying his country villa, and further shattering the remains of the Auditore family, all in dramatic fashion. With the Apple in their possession, and the backing of the Pope himself, the Borgia have absolute power over Rome, and Ezio spends the grand majority of the game systematically sabotaging their power structure while also gaining the support of the people of Rome for his own chapter of the assassin’s guild. 
 Several key supporting characters from Assassin’s Creed II return here, including Ezio’s sister Claudia and his love interest Caterina Sforza, as well as the historically unfettered caricatures of Niccolo Machiavelli and Leonardo da Vinci, all of whom are given room to develop. Though you’ll spend more meaningful time here playing as Desmond than in the first two games combined, his story mostly bookends the main narrative, exploring his burgeoning relationship with his ragtag team of modern assassins and hinting further at just what in the holy hot hell is going on in this universe. Frankly, I found myself so engrossed in the nuts and bolts of the world itself that a lot of the specific story beats were kind of a blur. In a way, the predestined nature of Ezio’s story makes it not really matter in the grand scheme, though the final moments with Desmond left me feeling baffled by the loose ends left in the game.




A quick intro video recaps the highlights of the story so far for those who haven’t kept up or simply forgot. Similarly, the first several hours of Assassin’s Creed: Brotherhood are uncharacteristically linear, spending ample time easing you into the particular feel of the Assassin’s Creed movement and combat, while also introducing you to the new stuff. The basic handling is no different from Assassin’s Creed II, and by and large, you’re performing the same types of tasks--sneaking through the dense streets and gymnastic rooftops, coolly tracking and assassinating specific targets, and juggling groups of enemy combatants. The group combat has never been Assassin’s Creed’s strongest point, though Brotherhood streamlines it in a way that makes stringing together one-hit kills an absolute breeze. This allows you to chew through your enemies, which, in kind, lets the game throw more enemies at you. It doesn’t really address the fundamentally shallow feel, but the satisfaction of watching Ezio fluidly conduct his dark business certainly helps take some of the edge off.


A lot of the activities introduced in Assassin’s Creed II are expanded on, or at the very least, multiplied. The real-estate angle from ACII has been opened up to let you purchase dozens of blacksmiths, art dealers, banks, and clothing stores, all the way up to actual Roman landmarks like the Pantheon and the Colosseum, with each purchase increasing your periodic income. Before you can start getting your Monopoly on, you need to rid the area of the Borgias' corrupting influence by destroying their tower in the district, a task which adds a little assassination mission to the tower-climbing business that has been an Assassin’s Creed staple. Unlike Assassin’s Creed II, you can now replay any story mission out of order, and each mission now has a secondary win condition, such as never breaking stealth or executing enemies in a specific fashion. There are side missions and challenges to be taken from thieves, mercenaries, and courtesans, a fun series of missions tasking you with stealing, using, and destroying some fantastical da Vinci-built war machines that the Borgia have in their possession, pickpockets and Borgia agents to chase around the city, shop-specific collection goals, and just a load of posters, feathers, flags, and treasure chests to be collected. At a certain point, the world of Brotherhood opens up, and it kind of never stops blossoming, and it constantly introduces something new to do. When you pull up the map to try and decide what you’re going to do next, it can be downright overwhelming. This is not a short game, and while I didn’t stick to the critical path, I probably spent a good 20 hours with Assassin’s Creed: Brotherhood.

 Depending on your play style and how much time you spend on all the stuff outside the main story missions, it might be a while before you get to start recruiting and training your own brotherhood of assassins, something that’s new to Brotherhood and easily my favorite addition. As you’re running your vicious errands around Rome, you’ll find citizens fighting it out with the corrupt city guards, and if you rescue them, they’ll pledge allegiance to your cause. Once recruited, you can earn more money and build up the experience of your individual assassins by sending them out on jobs across Europe. Any assassins that aren’t out on assignment, though, can be used to take care of any enemies you don’t want to dirty your hands with, simply by targeting the enemies in question and tapping a button. It really compounds the cool confidence Ezio has when you watch him raise his hand and then calmly stroll by as his minions appear out of nowhere and explore his enemies’ head and body cavities with various razor-sharp implements. Through the entire game, it never failed to elicit a satisfied chuckle from me when an assassin would just hop out of the nearest haystack or simply fall from the sky ready to do some pro-grade stabbin’, but there are a number of missions designed to necessitate their use. The only adjustment in Brotherhood that cut the wrong way for me was the decision to put horses inside the city limits. Rome is a massive city with significant tracts of open space that are well-suited for short bursts of the kind of horseback riding seen in the previous games, but they just don’t fit in the dense alleys and abundant staircases of the city’s urban centers. There’s even a new fast-travel system that doesn’t shrink the city too much, but which makes the horses feel at least somewhat redundant.

The biggest gamble in Assassin’s Creed: Brotherhood is the new multiplayer mode, which goes to some pretty clever lengths to adapt something recognizable as the Assassin’s Creed gameplay into a player-versus-player experience. The premise here is that all of the players are Abstergo trainees being taught in the ways of the assassins with networked Animus scenarios designed to help them better understand their enemy. This makes room for the biggest contrivance in the multiplayer, albeit a necessary one. Games can support up to eight players, and as such, there are eight different player models to choose from, and only one player can use one of those models in any given game. The levels are then populated exclusively with benign AI versions of the models, providing you with crowds of pedestrians to blend in with. There are a few different free-for-all and team-based configurations, but the basic idea is that everyone is given a specific player to assassinate, making everyone both hunter and hunted, creating this cat-and-mouse Mobius strip of sorts. In addition to your target’s face, you’re also provided with a simple directional compass that provides a general idea of where and how far away they are, though at a certain point you have to use your skills of observation to figure out who’s a real person and who’s AI.

  
That you’re constantly looking over your shoulder creates a terrific sense of tension, and there’s an interesting dynamic of balancing how quickly to approach your target without betraying yourself as a live target to whoever’s hunting you. The whole thing’s outfitted with a persistent experience system that grants you rechargeable abilities like a brief sprinting boost and additional slots to have multiple abilities active at once. It probably ends up working better than it really ought to, and I’ll admit that I found it particularly satisfying to be able to shout “UH!” at my live opponent as I slipped a dagger into his ribcage, though it’s still more of a novelty than a serious multiplayer contender. There’s good variety in the maps included, but there aren’t many of them, and the four modes don’t cover much ground. More importantly, with the tools provided, it’s remarkably easier to hunt down your targets than it is to evade assassination, and it makes the experience feel a little lopsided.

 Brotherhood lacks that generational leap we saw from Assassin’s Creed to Assassin’s Creed II, but it more than makes up for that with a full-bodied single-player experience teeming with interesting gameplay additions and a risky multiplayer component. Even more surprising than the single-player experience is the addition of a competitive multiplayer game that, through some downright acrobatic contrivances, manages to make the cat-and-mouse core of the single-player work with live opponents. For fans of the series Brotherhood is no optional sidebar; this is as significant an entry as either of its predecessors. It’s also about as fun as Assassin’s Creed has ever been.