March 15, 2009

Contemporary Hardware Platform Trends




  • Grid Computing

- Connects geographically remote computers into a single network to create a virtual supercomputer by combining the computational power of all computers on the grid.

- A typical CPU is in use only about 25% of the time on average, sitting idle the rest of the time.

- Grid computing has become practical (and economical) thanks to high-speed Internet connections.

- Grid computing depends on software to divide and apportion pieces of a program among several computers, sometimes many thousands of them.

- Client software communicates with a server software application.

- The server software breaks data into chunks that are parceled out to the grid machines.

- Client machines can perform their own tasks while running the grid application in the background.

Example:

Royal Dutch/Shell Group

1024 servers running Linux – the largest Linux supercomputer

The grid adjusts to accommodate the fluctuating data volumes that are typical in a seasonal business
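The divide-and-apportion idea above can be sketched in a few lines of Python. This is a toy illustration, not real grid middleware: the helper names are made up, and the loop at the end stands in for shipping each chunk to a different grid machine.

```python
def split_into_chunks(data, n_machines):
    # The server software parcels the data out in roughly equal pieces.
    size = (len(data) + n_machines - 1) // n_machines
    return [data[i:i + size] for i in range(0, len(data), size)]

def process_chunk(chunk):
    # Work done independently on one grid machine (here, a toy sum).
    return sum(chunk)

# In a real grid each chunk would be sent to a different computer; here we
# simply loop over the chunks and combine the partial results at the end.
data = list(range(1000))
partials = [process_chunk(c) for c in split_into_chunks(data, 4)]
assert sum(partials) == sum(data)
```

The key property is that each chunk can be processed with no knowledge of the others, which is why idle machines anywhere on the grid can join in.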

  • ON DEMAND Computing

- Refers to firms offloading peak demand for computing power to remote, large-scale data processing centers

- Firms can reduce their investment in IT infrastructure

- Also called “utility computing” – firms purchase computing power from central computing utilities and pay only for the amount of computing power they use, much as they pay for electricity

- Suits firms whose traffic surges annually on seasonal occasions

  • Autonomic Computing

- An industry-wide effort to develop systems that can configure themselves, optimize and tune themselves, heal themselves when broken, and protect themselves from outside intruders and self-destruction.

  • Edge Computing

- A multitier, load-balancing scheme for web-based applications in which significant parts of a website’s content, logic, and processing are handled by smaller, less expensive servers located near the users.

- There are three tiers in edge computing – the local client, the nearby edge computing platform, and the enterprise computers located in the firm’s data center

- Requests from the user’s client computer are initially processed by the edge computers

Business benefits of edge computing:

1. Technology costs are lowered – there is no need to purchase additional infrastructure for the firm’s own data center

2. Service levels are enhanced for users – responses take less time

3. The firm’s flexibility is enhanced because it can respond to business opportunities quickly
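The three-tier flow can be sketched as a toy request handler (all names and content here are hypothetical): the nearby edge server answers what it can from locally held content, and only the remaining requests travel on to the enterprise server in the data center.

```python
# Static content held at the inexpensive edge server, close to the users.
EDGE_CACHE = {"/index.html": "<html>home page</html>"}

def origin_fetch(path: str) -> str:
    # Stand-in for the enterprise computer in the firm's data center,
    # which handles the dynamic or complex requests.
    return f"dynamic response for {path}"

def edge_handle(path: str) -> str:
    if path in EDGE_CACHE:
        return EDGE_CACHE[path]   # served locally: faster, cheaper
    return origin_fetch(path)     # forwarded to the data center
```

Because most requests never leave the edge tier, users see shorter response times and the data center needs less capacity – the two benefits listed above.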


  • Cloud Computing

Cloud computing is Internet ("cloud") based development and use of computer technology ("computing"). It is a style of computing in which typically real-time scalable resources are provided “as a service” over the Internet to users who need not have knowledge of, expertise in, or control over the technology infrastructure ("in the cloud") that supports them.

It is a general concept that incorporates software as a service (SaaS), Web 2.0 and other recent, well-known technology trends, in which the common theme is reliance on the Internet for satisfying the computing needs of the users. An often-quoted example is Google Apps, which provides common business applications online that are accessed from a web browser, while the software and data are stored on Google servers.

The cloud is a metaphor for the Internet, based on how it is depicted in computer network diagrams, and is an abstraction for the complex infrastructure it conceals.

January 13, 2009

TCP/IP





Introduction to TCP/IP

Many people may not know what TCP/IP is nor what its effect is on the Internet. The fact is, without TCP/IP there would be no Internet. And it is because of the American military that the Internet exists. During the days of the Cold War, the Defense Department was interested in developing a means of electronic communication which could survive an attack by being able to re-route itself around any failed section of the network. They began a research project designed to connect many different networks, and many different types of hardware from various vendors. Thus was the birth of the Internet (sort of). In reality, they were forced to connect different types of hardware from various vendors because the different branches of the military used different hardware. Some used IBM, while others used Unisys or DEC.

IP is responsible for moving data from computer to computer. IP forwards each packet based on a four-byte destination address (the IP number). IP uses gateways to help move data from point “a” to point “b”. Early gateways were responsible for finding routes for IP to follow.

TCP is responsible for ensuring correct delivery of data from computer to computer. Because data can be lost in the network, TCP adds support to detect errors or lost data and to trigger retransmission until the data is correctly and completely received.

How TCP/IP works

Computers are first connected to their Local Area Network (LAN). TCP/IP shares the LAN with other systems such as file servers, web servers and so on. The hardware connects via a network interface that has its own hard-coded unique address – called a MAC (Media Access Control) address. The client is either assigned an address or requests one from a server. Once the client has an address, it can communicate, via IP, with the other clients on the network. As mentioned above, IP is used to send the data, while TCP verifies that it is sent correctly.

When a client wishes to connect to another computer outside the LAN, they generally go through a computer called a Gateway (mentioned above). The gateway’s job is to find and store routes to destinations. It does this through a series of broadcast messages sent to other gateways and servers nearest to it. They in turn could broadcast for a route. This procedure continues until a computer somewhere says “Oh yeah, I know how to get there.” This information is then relayed to the first gateway that now has a route the client can use.
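That broadcast-and-relay route discovery amounts to a search over the graph of gateways. A breadth-first-search sketch (the gateway names and the `links` adjacency map are invented for illustration):

```python
from collections import deque

def find_route(links, start, dest):
    # Each gateway "asks" its neighbours in turn, one hop further each round,
    # until some gateway says it knows how to reach the destination.
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dest:
            return path              # "Oh yeah, I know how to get there."
        for neighbour in links.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None                      # no gateway knows a route

links = {"client-gw": ["gw1", "gw2"], "gw1": ["gw3"], "gw3": ["dest-gw"]}
assert find_route(links, "client-gw", "dest-gw") == ["client-gw", "gw1", "gw3", "dest-gw"]
```

Once found, the route is relayed back and stored at the first gateway, just as the paragraph above describes.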

How does the system know the data is correct?

As mentioned above, IP is responsible for getting the data there. TCP then takes over to verify it.

Encoded in the data packets is other data that is used to verify the packet. This data (a checksum, or mathematical representation of the packet) is confirmed by TCP and a confirmation is sent back to the sender.
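The checksum idea can be made concrete with the 16-bit ones'-complement checksum that TCP and IP use for their headers (RFC 1071). This is a simplified sketch of the arithmetic only, not a full TCP implementation:

```python
def internet_checksum(data: bytes) -> int:
    # 16-bit ones'-complement checksum (RFC 1071), as used in TCP/IP headers.
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back in
    return ~total & 0xFFFF

packet = b"data"                 # even-length toy payload keeps words aligned
checksum = internet_checksum(packet)
# The receiver recomputes the checksum over data + checksum;
# a result of 0 means the packet arrived intact.
assert internet_checksum(packet + checksum.to_bytes(2, "big")) == 0
```

If even one bit flips in transit, the recomputed value is non-zero and the receiver knows to request retransmission rather than acknowledge.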

This process of sending, receiving and acknowledging happens for each individual packet sent over the Internet.

When the data is verified, it is reassembled on the receiving computer. If a packet is not verified, the sending computer will re-send it and wait for confirmation. This way both computers – the sender and the receiver – know which data is correct and which isn’t.
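The send/acknowledge/re-send loop can be sketched as a minimal stop-and-wait scheme. The `FlakyNetwork` class below is a toy stand-in for a lossy network, not real TCP:

```python
class FlakyNetwork:
    # Toy network that loses the first `drops` transmissions.
    def __init__(self, drops: int):
        self.drops = drops

    def send(self, packet):
        if self.drops > 0:
            self.drops -= 1
            return None      # packet lost in transit; no ACK will come back
        return packet        # delivered; the receiver will acknowledge it

def send_reliably(network, packet, max_tries=10):
    # Stop-and-wait sketch: keep retransmitting until an ACK arrives.
    for attempt in range(1, max_tries + 1):
        if network.send(packet) is not None:
            return attempt   # number of transmissions it took
    raise TimeoutError("no ACK after max_tries attempts")

# A network that drops the first two copies still delivers on the third try.
assert send_reliably(FlakyNetwork(2), b"chunk") == 3
```

Real TCP is far more sophisticated (it keeps many packets in flight at once), but the core contract – retransmit until acknowledged – is the same.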

One nice thing about this protocol is that it doesn’t need to stick to just one route. Generally, when you are sending or receiving data, the packets take multiple routes to reach their destination. This makes delivery resilient: if one route fails, traffic can flow over another.

Just the facts:

TCP/IP addresses are based on 4 octets of 8 bits each. Each octet represents a number between 0 and 255, so an IP address looks like: 192.168.1.100. (Note that no octet can exceed 255 – an address like 111.222.333.444 is impossible.)
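The four octets map directly onto the four-byte destination address that IP forwards on. A small sketch of packing the dotted-quad form into its 32-bit integer and back (helper names are my own):

```python
def ip_to_int(addr: str) -> int:
    # Pack a dotted-quad IPv4 address into its 32-bit integer form.
    octets = [int(part) for part in addr.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    return (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]

def int_to_ip(value: int) -> str:
    # Unpack a 32-bit integer back into dotted-quad notation.
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

assert ip_to_int("192.168.1.1") == 0xC0A80101
assert int_to_ip(0xC0A80101) == "192.168.1.1"
```

The assertion on the octet range is exactly why 111.222.333.444 above could never be a real address.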

There are 3 primary classes of IP addresses:

Ranges starting with 1 and ending with 126 (e.g., 1.1.1.1 to 126.255.255.254) are Class A

Ranges starting with 128 and ending with 191 (e.g., 128.1.1.1 to 191.255.255.254) are Class B

Ranges starting with 192 and ending with 223 (e.g., 192.1.1.1 to 223.255.255.254) are Class C (You will notice that there are no IP addresses starting with 127 – these are reserved loopback addresses.)
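The class rules come down to a test on the first octet, which a short sketch makes plain (function name is my own):

```python
def ip_class(addr: str) -> str:
    # Classify an IPv4 address by its first octet (classful addressing).
    first = int(addr.split(".")[0])
    if first == 127:
        return "reserved (loopback)"
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    return "D/E (multicast/experimental)"

assert ip_class("10.0.0.1") == "A"
assert ip_class("172.16.0.1") == "B"
assert ip_class("192.168.1.1") == "C"
```

Addresses above 223 fall into Classes D and E (multicast and experimental), which is why the everyday ranges stop where they do.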