30 Jan 2014

Internet Protocol Suite

 

Internet Protocol Suite :-

                           The Internet protocol suite is the networking model and a set of communications protocols used for the Internet and similar networks. It is commonly known as TCP/IP, because its most important protocols, the Transmission Control Protocol (TCP) and the Internet Protocol (IP), were the first networking protocols defined in this standard.
                            It is occasionally known as the DoD model, because the development of the networking model was funded by DARPA, an agency of the United States Department of Defense.
  
The Internet Protocol Suite is commonly described using the following terms:

Domain :-

                       A domain name (for instance, "ittechnocrates.tk") is an identification string that defines a realm of administrative autonomy, authority or control on the Internet. Domain names are formed by the rules and procedures of the Domain Name System (DNS). Any name registered in the DNS is a domain name.
                      Domain names are used in various networking contexts and for application-specific naming and addressing purposes. In general, a domain name represents an Internet Protocol (IP) resource, such as a personal computer used to access the Internet, a server computer hosting a web site, the web site itself, or any other service communicated via the Internet.

IP :-

                 The Internet Protocol (IP) is the principal communications protocol in the Internet protocol suite for relaying datagrams across network boundaries. Its routing function enables internetworking, and essentially establishes the Internet. IP, as the primary protocol in the Internet layer of the Internet protocol suite, has the task of delivering packets from the source host to the destination host solely based on the IP addresses in the packet headers.
               For this purpose, IP defines packet structures that encapsulate the data to be delivered. It also defines addressing methods that are used to label the datagram with source and destination information. Historically, IP was the connectionless datagram service in the original Transmission Control Program introduced by Vint Cerf and Bob Kahn in 1974; the other part of that program became the connection-oriented Transmission Control Protocol (TCP).
                      The Internet protocol suite is therefore often referred to as TCP/IP. The first major version of IP, Internet Protocol Version 4 (IPv4), is the dominant protocol of the Internet. Its successor is Internet Protocol Version 6 (IPv6).

IPv4 :-

                      Internet Protocol Version 4 (IPv4) provides safeguards to ensure that the IP packet header is error-free. A routing node calculates a checksum for a packet. If the checksum is bad, the routing node discards the packet. The routing node does not have to notify either end node, although the Internet Control Message Protocol (ICMP) allows such notification.
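
To make the checksum idea concrete, here is a minimal Python sketch (an illustration, not a real IP stack) of the ones'-complement checksum computed over an IPv4 header; the header field values below are arbitrary examples.

    import struct

    def ipv4_checksum(header: bytes) -> int:
        """Ones'-complement sum of the header's 16-bit words."""
        if len(header) % 2:
            header += b"\x00"                          # pad to an even length
        total = 0
        for (word,) in struct.iter_unpack("!H", header):
            total += word
            total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
        return ~total & 0xFFFF

    # Example 20-byte header: version/IHL, TOS, total length, ID, flags/fragment,
    # TTL, protocol (6 = TCP), checksum placeholder (0), source IP, destination IP.
    header = struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0, 40, 0x1C46, 0x4000, 64, 6, 0,
        bytes([192, 168, 1, 10]), bytes([192, 168, 1, 1]),
    )
    print(hex(ipv4_checksum(header)))

A routing node recomputes this sum over each received header; if the result does not match the checksum field, the packet is discarded, as described above.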


IPv6 :-

                           The successor to IPv4 is IPv6. Its most prominent modification from version 4 is the addressing system. IPv4 uses 32-bit addresses while IPv6 uses 128-bit addresses. Although adoption of IPv6 has been slow, as of June 2008, all United States government systems have demonstrated basic infrastructure support for IPv6.
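
The difference in address size is easy to see with Python's standard ipaddress module; the two addresses below are arbitrary illustrative examples.

    import ipaddress

    v4 = ipaddress.ip_address("192.168.1.10")
    v6 = ipaddress.ip_address("2001:db8::1")

    print(len(v4.packed) * 8)   # 32  -> IPv4 addresses are 32 bits
    print(len(v6.packed) * 8)   # 128 -> IPv6 addresses are 128 bits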

DNS :-

                The Domain Name System (DNS) is a hierarchical distributed naming system for computers, services, or any resource connected to the Internet or a private network. It associates various information with domain names assigned to each of the participating entities. Most prominently, it translates easily memorized domain names to the numerical IP addresses needed for the purpose of locating computer services and devices worldwide.
         DNS can be quickly updated, allowing a service's location on the network to change without affecting the end users, who continue to use the same host name. Users take advantage of this when they use meaningful Uniform Resource Locators (URLs) and e-mail addresses without having to know how the computer actually locates the services.
        The Domain Name System is an essential component of the functionality of the Internet. It also specifies the technical functionality of this database service: it defines the DNS protocol, a detailed specification of the data structures and data communication exchanges used in DNS, as part of the Internet Protocol Suite.
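
As a small illustration of the name-to-address translation described above, the Python sketch below asks the operating system's resolver (through the standard socket module) for the addresses behind a hostname; the hostname is just an example.

    import socket

    hostname = "www.wikipedia.org"

    # Forward lookup: domain name -> IP addresses (IPv4 and/or IPv6).
    for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, None):
        print(family.name, sockaddr[0])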

 Subnet Mask :- 

                     The practice of dividing a network into two or more networks is called subnetting. All computers that belong to a subnet are addressed with a common, identical, most-significant bit-group in their IP address. This results in the logical division of an IP address into two fields, a network or routing prefix and the rest field or host identifier.
               The rest field is an identifier for a specific host or network interface. In IPv4 the routing prefix is also specified in the form of the subnet mask, which is expressed in quad-dotted decimal representation like an address. For example, 255.255.255.0 is the network mask for the 192.168.1.0/24 prefix.
               Traffic between subnetworks is exchanged or routed with special gateways called routers which constitute the logical or physical boundaries between the subnets.
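
The 255.255.255.0 example above can be checked with Python's standard ipaddress module; the host address 192.168.1.42 is an arbitrary illustration.

    import ipaddress

    network = ipaddress.ip_network("192.168.1.0/255.255.255.0")
    print(network)                 # 192.168.1.0/24
    print(network.netmask)         # 255.255.255.0

    host = ipaddress.ip_address("192.168.1.42")
    print(host in network)         # True  -> same subnet, no router needed
    print(int(host) & 0xFF)        # 42    -> the host identifier (rest field)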

 Default Gateway :-

             In computer networking, a gateway is a node (a router) on a TCP/IP network that serves as an access point to another network. A default gateway is the node on the computer network that the network software uses when an IP address does not match any other routes in the routing table.
             It is usually the IP address of the router to which your PC network is connected. In home computing configurations, an ISP often provides a physical device which both connects local hardware to the Internet and serves as a gateway. Such devices include DSL routers and cable routers.
            In organizational systems a gateway is a node that routes the traffic from a workstation to another network segment. The default gateway commonly connects the internal networks and the outside network (Internet). In such a situation, the gateway node could also act as a proxy server and a firewall.
            The gateway is also associated with both a router, which uses headers and forwarding tables to determine where packets are sent, and a switch, which provides the actual path for the packet in and out of the gateway.
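
The following toy Python sketch (not any real router's code) illustrates the rule that traffic falls through to the default gateway when no more specific route matches; all addresses and next hops are made up for the example.

    import ipaddress

    routing_table = {
        ipaddress.ip_network("192.168.1.0/24"): "deliver locally",
        ipaddress.ip_network("10.0.0.0/8"):     "via 192.168.1.254",
        ipaddress.ip_network("0.0.0.0/0"):      "via 192.168.1.1 (default gateway)",
    }

    def next_hop(destination: str) -> str:
        dest = ipaddress.ip_address(destination)
        # Longest-prefix match: prefer the most specific route containing dest.
        matches = [net for net in routing_table if dest in net]
        best = max(matches, key=lambda net: net.prefixlen)
        return routing_table[best]

    print(next_hop("192.168.1.20"))   # deliver locally
    print(next_hop("8.8.8.8"))        # via 192.168.1.1 (default gateway)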

 WINS:-

          Windows Internet Name Service (WINS) is Microsoft's implementation of NetBIOS Name Service (NBNS), a name server and service for NetBIOS computer names.
        Effectively, WINS is to NetBIOS names what DNS is to domain names — a central mapping of host names to network addresses. Like DNS, it is implemented in two parts, a Server Service (that manages the embedded Jet Database, server to server replication, service requests, and conflicts) and a TCP/IP Client component which manages the client's registration and renewal of names, and takes care of queries.

Sources:-
  • www.google.com
  • www.wikipedia.org

26 Jan 2014

Apache Hadoop

Introduction to Hadoop

    Apache Hadoop is an open source software framework developed by Doug Cutting and Mike Cafarella in 2005. It provides storage and large-scale processing of data sets on clusters of commodity hardware. Hadoop is a top-level Apache project, written in Java, built and used by a global community of users and contributors. It is licensed under the Apache License 2.0. The latest stable version, 2.2, was released on 15 October 2013.

Elaborating in Detail

What is Apache Hadoop?

    Apache Hadoop is an open source, cross-platform software framework used to store and process large amounts of data. It uses scalable, distributed data storage and processing and runs on standard hardware. Apache Hadoop provides a distributed file system for the storage and processing of data.
    In a traditional non-distributed architecture, data is stored on one server and any client program accesses this central data server to retrieve it. The non-distributed model has a few fundamental issues. In this model you mostly scale vertically, by adding more CPU, more storage, and so on. The architecture is also not reliable: if the main server fails, you have to restore the data from a backup. From a performance point of view, it will not return results quickly when you run a query against a huge data set. In a Hadoop distributed architecture, both data and processing are distributed across multiple servers.


History of Hadoop

    The Hadoop framework was developed by Cutting and Cafarella. Cutting, who was working at Yahoo! at the time, named it after his son's toy elephant. It was originally developed to support distribution for the Nutch search engine project. Google had earlier introduced a distributed computing model based on GFS (the Google File System) and MapReduce; Nutch (open source web search software) was later rewritten using MapReduce, and Hadoop was branched out of Nutch as a separate project. Hadoop is now a top-level Apache project that has gained tremendous momentum and popularity in recent years.

The Key Features of Hadoop:

Local Storage.
    When you run a query against a large data set, every server in this distributed architecture executes the query on its local machine against its local data set. Finally, the result sets from all these local servers are consolidated.
Faster results.
    In simple terms, instead of running a query on a single server, the query is split across multiple servers and the results are consolidated. This means that the results of a query on a larger dataset are returned faster.
Less expensive servers.
    You don't need a powerful server. Just use several less expensive commodity servers as individual Hadoop nodes.
High fault tolerance.
    If any node in the Hadoop environment fails, the dataset is still returned properly, because Hadoop takes care of replicating and distributing the data efficiently across the remaining nodes.
Supports a large number of servers.
    A simple Hadoop implementation can use just two servers, but you can scale up to several thousand servers without any additional effort.
Robust and platform independent.
    Hadoop is written in Java, so it can run on any platform.
Used for unstructured data.
    Hadoop is used for unstructured data, where the user does not know the full shape of the result in advance. Hadoop is not a replacement for an RDBMS.
Distributed, scalable and portable file system.
    HDFS stores large files (typically in the range of gigabytes to terabytes) across multiple machines.

Components of Hadoop:

The Apache Hadoop framework is composed of the following modules:
  • Hadoop Common - contains libraries and utilities needed by other Hadoop modules.
  • Hadoop Distributed File System (HDFS) - a distributed file-system that stores data on the commodity machines, providing very high aggregate bandwidth across the cluster.
  • Hadoop YARN - a resource-management platform responsible for managing compute resources in clusters and using them for scheduling of users' applications.
  • Hadoop MapReduce - a programming model for large scale data processing.
All the modules in Hadoop are designed with a fundamental assumption that hardware failures (of individual machines, or racks of machines) are common and thus should be automatically handled in software by the framework. Apache Hadoop's MapReduce and HDFS components originally derived respectively from Google's MapReduce and Google File System (GFS) papers.
Beyond HDFS, YARN and MapReduce, the entire Apache Hadoop “platform” is now commonly considered to consist of a number of related projects as well – Apache Pig, Apache Hive, Apache HBase, and others.
For end users, though MapReduce Java code is common, any programming language can be used with "Hadoop Streaming" to implement the "map" and "reduce" parts of the user's program (a minimal streaming sketch follows). Apache Pig and Apache Hive, among other related projects, expose higher-level user interfaces such as Pig Latin and a SQL variant, respectively. The Hadoop framework itself is mostly written in the Java programming language, with some native code in C and command-line utilities written as shell scripts.
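
As a hedged illustration of Hadoop Streaming, the two small Python scripts below implement the classic word-count map and reduce steps; the file names mapper.py and reducer.py are assumptions for the example.

    #!/usr/bin/env python3
    # mapper.py -- emit a (word, 1) pair for every word read from stdin.
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

    #!/usr/bin/env python3
    # reducer.py -- Hadoop sorts the mapper output by key, so all counts for
    # the same word arrive consecutively and can be summed in one pass.
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(f"{current_word}\t{current_count}")
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(f"{current_word}\t{current_count}")

Such a job is typically launched through the hadoop-streaming jar, passing -input, -output, -mapper and -reducer options; the exact jar location depends on the installation.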





HDFS

HDFS stands for Hadoop Distributed File System, which is the storage system used by Hadoop. At a high level, HDFS is organized around a single Name Node and many Data Nodes, as described below.


The following are some of the key points to remember about the HDFS:
  • In a typical HDFS cluster there is one Name Node and multiple Data Nodes (servers); files are stored as data blocks (b1, b2, and so on).
  • When you dump a file (or data) into the HDFS, it stores them in blocks on the various nodes in the Hadoop cluster.
  • HDFS creates several replicas of each data block and distributes them across the cluster in a way that is reliable and allows the data to be retrieved quickly.
  • A typical HDFS block size is 128 MB. Each data block is replicated to multiple nodes across the cluster, and Hadoop internally makes sure that a node failure never results in data loss (a simple sizing sketch follows this list).
  • There is one Name Node, which manages the file system metadata, and multiple Data Nodes (inexpensive commodity servers), which store the data blocks. When you execute a query from a client, it first contacts the Name Node to get the file metadata and then reaches out to the Data Nodes to fetch the actual data blocks.
  • Hadoop provides a command line interface for administrators to work on HDFS.
  • The Name Node comes with an in-built web server from where you can browse the HDFS file system and view some basic cluster statistics.
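
As a quick back-of-the-envelope sketch (not code from Hadoop itself), the snippet below shows how a file splits into blocks and how much raw storage its replicas consume, assuming the 128 MB block size mentioned above and a replication factor of 3.

    import math

    BLOCK_SIZE_MB = 128          # typical HDFS block size noted above
    REPLICATION_FACTOR = 3       # a common default, assumed for this example

    file_size_mb = 10_000        # e.g. a 10 GB file

    blocks = math.ceil(file_size_mb / BLOCK_SIZE_MB)
    raw_storage_mb = blocks * BLOCK_SIZE_MB * REPLICATION_FACTOR

    print(f"{blocks} blocks, ~{raw_storage_mb / 1024:.1f} GB of raw cluster storage")
    # -> 79 blocks, ~29.6 GB of raw cluster storage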

MapReduce

MapReduce is a parallel programming model that is used to retrieve the data from the Hadoop cluster.
  • In this model, the library handles a lot of messy details that programmers don't need to worry about. For example, the library takes care of parallelization, fault tolerance, data distribution, load balancing, etc.
  • It splits the tasks and executes them on the various nodes in parallel, thus speeding up the computation and retrieving the required data from a huge dataset quickly.
  • This provides a clean abstraction for programmers. They just have to implement (or use) two functions: map and reduce (a small word-count sketch follows this list).
  • The data are fed into the map function as key/value pairs to produce intermediate key/value pairs.
  • Once the mapping is done, all the intermediate results from the various nodes are reduced to create the final output.
  • The JobTracker keeps track of all the MapReduce jobs running on the various nodes. It schedules the jobs and keeps track of all the map and reduce tasks running across the nodes; if any of them fails, it reallocates the work to another node.
  • In simple terms, the JobTracker is responsible for making sure that a query on a huge dataset runs successfully and the data is returned to the client in a reliable manner. The TaskTracker performs the map and reduce tasks assigned by the JobTracker and constantly sends a heartbeat message to the JobTracker, which helps the JobTracker decide whether to delegate a new task to that particular node.
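
The single-process Python sketch below imitates the map, shuffle, and reduce phases described above for a word count; it only illustrates the model and does not use Hadoop itself.

    from collections import defaultdict

    def map_phase(document):
        """Emit an intermediate (word, 1) pair for every word."""
        for word in document.split():
            yield word, 1

    def reduce_phase(word, counts):
        """Consolidate all intermediate values for one key."""
        return word, sum(counts)

    documents = ["hadoop stores data", "hadoop processes data in parallel"]

    # "Shuffle": group the intermediate pairs by key, as the framework would
    # before handing them to the reducers.
    grouped = defaultdict(list)
    for doc in documents:
        for word, count in map_phase(doc):
            grouped[word].append(count)

    results = dict(reduce_phase(w, c) for w, c in grouped.items())
    print(results)   # {'hadoop': 2, 'stores': 1, 'data': 2, ...}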

Supporting Evidence/Examples

Uses of Hadoop Today:

Yahoo!

On February 19, 2008, Yahoo! Inc. launched what it claimed was the world's largest Hadoop production application. The Yahoo! Search Webmap is a Hadoop application that runs on a Linux cluster with more than 10,000 cores and produces data that is used in every Yahoo! Web search query.
There are multiple Hadoop clusters at Yahoo! and no HDFS file systems or MapReduce jobs are split across multiple datacenters. Every Hadoop cluster node bootstraps the Linux image, including the Hadoop distribution. Work that the clusters perform is known to include the index calculations for the Yahoo! search engine.
On June 10, 2009, Yahoo! made the source code of the version of Hadoop it runs in production available to the public. Yahoo! contributes all the work it does on Hadoop to the open-source community. The company's developers also fix bugs, provide stability improvements internally and release this patched source code so that other users may benefit from their effort.

Facebook

In 2010, Facebook claimed to have the largest Hadoop cluster in the world, with 21 petabytes (PB) of storage. On June 13, 2012, they announced that the data had grown to 100 PB, and on November 8, 2012, they announced that the warehouse was growing by roughly half a PB per day.

Other users

As of 2013, Hadoop adoption is widespread; for example, more than half of the Fortune 50 use Hadoop.

Contradictory Examples

  • The only benefit to using Hadoop is scaling. If you have a single table containing many terabytes of data, Hadoop might be a good option for running full table scans on it. If you don’t have such a table, avoid Hadoop. It isn’t worth the hassle and you’ll get results with less effort and in less time if you stick to traditional methods.
  • Hadoop does not have any conception of indexing. Hadoop has only full table scans. Hadoop is full of leaky abstractions.
  • Because it lacks indexing, Hadoop retrieves specific data more slowly than indexed SQL databases such as PostgreSQL. It is only fast and useful when the data is unstructured or stored in a single large table.

Dispelling the Contradictory Examples

Hadapt believed differently, and for the last four years it has been espousing a contrary vision. Instead of viewing Hadoop and database systems as complementary, Hadapt has viewed them as competitive and has championed the idea of bringing high-performance SQL to Hadoop in order to create a single system that can handle both structured and unstructured data processing. In 2008 the team behind Hadapt started building a system called HadoopDB that does exactly this; an initial prototype was completed by March 2009 and the work was submitted to VLDB, where it was accepted and published. Hadapt was founded shortly afterwards (in 2010) to productize this vision.

Reaffirming the Trends

Hadoop is making its way into the enterprise, as organizations look to extract valuable information and intelligence from the mountains of data in their storage environments. The way in which this data is analyzed and stored is changing, and Hadoop has become a critical part of this transformation. 

Conclusion

  • Hadoop started out as an open source effort to replicate the system described in the MapReduce research paper that was published by Google in 2004.
  • It started gaining steam in 2006 and finally got adopted by several major Web enterprises for use in production in 2008.
  • By 2009 it became clear that Hadoop was going to be a major force to be reckoned with for processing unstructured data.
  • Between then and now, just about everybody in the industry has agreed that Hadoop and database systems are perfectly complementary.
  • Hadoop can be used for processing unstructured data, ETL-style transformations, and one-off data processing jobs, while database systems can be used for fast SQL access to structured data.
  • Data can be shipped between Hadoop and relational database systems over a connector. For example, a Hadoop job can be run to structure the data, after which it is sent to a relational database system where it can be queried using SQL (a small sketch of this hand-off follows).
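
A minimal local stand-in for that hand-off is sketched below: instead of a real connector (such as Apache Sqoop), it simply loads the tab-separated output a Hadoop job might produce into an in-memory SQLite table and queries it with SQL. The sample data and table schema are assumptions for illustration.

    import sqlite3

    # Pretend this is the consolidated output of a word-count job (key \t value).
    hadoop_output = "hadoop\t2\ndata\t2\nstores\t1\n"

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE word_counts (word TEXT, count INTEGER)")
    rows = [line.split("\t") for line in hadoop_output.splitlines()]
    conn.executemany("INSERT INTO word_counts VALUES (?, ?)", rows)

    # Fast, structured SQL access to the now-structured data.
    for word, count in conn.execute(
            "SELECT word, count FROM word_counts ORDER BY count DESC"):
        print(word, count)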


22 Jan 2014

Properties of Cloud Computing


Introduction

Cloud computing means providing and consuming services virtually, over the Internet or other shared platforms. It is used to share services and resources with those who cannot afford the expense of owning them outright but can use them in partnership with others who do. Cloud computing extends beyond a single computer to a large enterprise. The idea is rooted in the centralized computing infrastructure that was popular in the 1950s and 1960s. The main characteristics that distinguish cloud computing from centralized computing are listed below:


  • User Centric
  • No Up-Front Commitments
  • Task Centric
  • On-Demand Access
  • Powerful
  • Nice Pricing
  • Self-Healing
  • Simplified Application and Scalability
  • Multi-tenancy and Intelligence
  • Efficient Resource Allocation
  • Programmable
  • Energy Efficiency
  • Flexible
  • Seamless Creation and Use of Third-Party Services

Elaborating in Detail

  • User Centric:-
When a user connects to the cloud with the proper authority, the data and resources available there, both hardware and software, effectively become the user's own. The environment behaves as if it were entirely the user's own application or workspace.
  • Powerful:-
This characteristic describes the large computational power and storage available on a shared platform with shared resources. Users can share, store, and upload at the same time from different places.
  • Self-Healing:-
The cloud is threatened by virus attacks and large-scale data crashes, and this characteristic protects data from such problems. A shadow copy or hot backup of the data is always kept, so if the data crashes the system retrieves the backup copy and heals itself.
  • Multi-tenancy and Intelligence:-
Multi-tenancy means reusability: it allows the sharing of data, and of the costs of services and storage, among a large number of users. Cloud management allows each user to access only the data he or she is authorized to see. In this way the cloud shows its intelligence, ensuring that data access and operations are performed only by properly authorized users.
  • Programmable:-
Many processes associated with cloud computing are programmable, such as login, storing, and retrieving. The cloud can be programmed, for example, to store only specific items such as images or documents.
  • Flexibility:-
Cloud computing is flexible. It can change its behavior and appearance according to what the user wants. Users can modify the settings of the cloud environment to suit how they want it to appear and how they want to work.
  • No Up-Front Commitments:-
Cloud computing requires no up-front commitments. The user and the service provider may be in different places and may never meet face to face, yet cloud computing makes it easy to use the services through a license agreement. This is typically handled by pay-as-you-go services or online payment methods such as PayPal or Visa.
  • On-Demand Access:-
The user can access data in the cloud from any place, over any network location, at any time, with proper authorization and authentication. He or she can perform data operations at any time according to his or her own schedule; there is no boundary of place or time on the user.
  • Simplified Application and Scalability:-
Most applications, such as office automation, photo editing, document management, and customer relationship management (CRM) software, are replicated onto the service provider's infrastructure and made more scalable and accessible through a browser on demand.
  • Efficient Resource Allocation:-
With cloud computing a user can access most of the resources offered by the provider, and many users can access the same resources at the same time. Resources are allocated to each user in such an advance, reusable manner that every user can make efficient use of them.
  • Energy Efficiency:-
Cloud computing increases energy efficiency and introduces the concept of green computing. Multiple users are connected to a single server and multiple resources can be allocated to a single user, so the need for multiple mainframes and client computers to monitor the services is reduced.
  • Seamless Creation and Use of Third-Party Services:-
On-demand access and service orientation create seamless new opportunities for building new systems and applications flexibly, using shareable development platforms and resources.



Conclusion

Nowadays, cloud computing has become a vast field for IT developers and users. With cloud computing, creating, publishing, and using new applications is easy and fast. Data access is on demand, so a copy of the data can be obtained at any place, at any time, on any device, through a browser. Cloud computing saves a great deal of human effort by sharing resources, infrastructure, services, and platforms among users and developers. It saves energy through green computing and enables seamless new creations through third-party services and service sharing. It is now easier to turn new ideas into products with low cost and high selling value, concentrating technical effort on any platform.

Reference

  • Mastering Cloud Computing: Foundations and Applications Programming, by R. Buyya, C. Vecchiola, and S. Thamarai Selvi.

10 Jan 2014

Terminologies of Cloud Computing…

Advertising-based pricing model
A pricing model whereby services are offered to customers at low or no cost, with the service provider being compensated by advertisers whose ads are delivered to the consumer along with the service.

Amazon EC2
Amazon's Elastic Compute Cloud Web service, which provides resizable computing capacity in the cloud so developers can enjoy great scalability for building applications.

Amazon S3
Amazon Simple Storage Service -- Amazon's cloud storage service.

CDN
Content delivery network -- A system consisting of multiple computers that contain copies of data, which are located in different places on the network so clients can access the copy closest to them.

Cloud
A metaphor for a global network, first used in reference to the telephone network and now commonly used to represent the Internet.

Cloud broker
An entity that creates and maintains relationships with multiple cloud service providers. It acts as a liaison between cloud services customers and cloud service providers, selecting the best provider for each customer and monitoring the services.

Cloud operating system
A computer operating system that is specially designed to run in a provider's datacenter and be delivered to the user over the Internet or another network. Windows Azure is an example of a cloud operating system or "cloud layer" that runs on Windows Server 2008. The term is also sometimes used to refer to cloud-based client operating systems such as Google's Chrome OS.

Cloud Oriented Architecture
A term coined by Jeff Barr at Amazon Web Services to describe an architecture where applications act as services in the cloud and serve other applications in the cloud environment.

Cloud portability
The ability to move applications and data from one cloud provider to another. See also Vendor lock-in.

Cloud provider
A company that provides cloud-based platform, infrastructure, application, or storage services to other organizations and/or individuals, usually for a fee.

Cloud storage
A service that allows customers to save data by transferring it over the Internet or another network to an offsite storage system maintained by a third party.

Cloudsourcing
Replacing traditional IT services with cloud services.

Cloudstorming
Connecting multiple cloud computing environments.

Cloudware
Software that enables creating, deploying, running, or managing applications in the cloud.

Cluster
A group of linked computers that work together as if they were a single computer, for high availability and/or load balancing.

Consumption-based pricing model
A pricing model whereby the service provider charges its customers based on the amount of the service the customer consumes, rather than a time-based fee. For example, a cloud storage provider might charge per gigabyte of information stored. See also Subscription-based pricing model.

Customer self-service
A feature that allows customers to provision, manage, and terminate services themselves, without involving the service provider, via a Web interface or programmatic calls to service APIs.

Disruptive technology
A term used in the business world to describe innovations that improve products or services in unexpected ways and change both the way things are done and the market. Cloud computing is often referred to as a disruptive technology because it has the potential to completely change the way IT services are procured, deployed, and maintained.

Elastic computing
The ability to dynamically provision and de-provision processing, memory, and storage resources to meet demands of peak usage without worrying about capacity planning and engineering for peak usage.

External cloud
Public or private cloud services that are provided by a third party outside the organization.

Google App Engine
A service that enables developers to create and run Web applications on Google's infrastructure and share their applications via a pay-as-you-go, consumption-based plan with no setup costs or recurring fees.

Google Apps
Google's SaaS offering that includes an office productivity suite, email, and document sharing, as well as Gmail, Google Talk for instant messaging, Google Calendar and Google Docs, spreadsheets, and presentations.


HaaS
Hardware as a service; see IaaS.

Hosted application
An Internet-based or Web-based application software program that runs on a remote server and can be accessed via an Internet-connected PC or thin client. See also SaaS.

Hybrid cloud
A networking environment that includes multiple integrated internal and/or external providers.

IaaS
Infrastructure as a service -- Cloud infrastructure services, whereby a virtualized environment is delivered as a service over the Internet by the provider. The infrastructure can include servers, network equipment, and software.

IBM Smart Business
IBM's cloud solutions, which include IBM Smart Business Test Cloud, IBM Smart Analytics Cloud, IBM Smart Business Storage Cloud, IBM Information Archive, IBM Lotus Live, and IBM LotusLive iNotes.

Internal cloud
A type of private cloud whose services are provided by an IT department to those in its own organization.

Mashup
A Web-based application that combines data and/or functionality from multiple sources.

Microsoft Azure
Microsoft cloud services that provide the platform as a service (see PaaS), allowing developers to create cloud applications and services.

Middleware
Software that sits between applications and operating systems, consisting of a set of services that enable interoperability in support of distributed architectures by passing data between applications. So, for example, the data in one database can be accessed through another database.

On-demand service
A model by which a customer can purchase cloud services as needed; for instance, if customers need to utilize additional servers for the duration of a project, they can do so and then drop back to the previous level after the project is completed.

PaaS
Platform as a service -- Cloud platform services, whereby the computing platform (operating system and associated services) is delivered as a service over the Internet by the provider.

Pay as you go
A cost model for cloud services that encompasses both subscription-based and consumption-based models, in contrast to the traditional IT cost model, which requires up-front capital expenditures for hardware and software.

Private cloud
Services offered over the Internet or over a private internal network to only select users, not available to the general public.

Public cloud
Services offered over the public Internet and available to anyone who wants to purchase the service.

SaaS
Software as a service -- Cloud application services, whereby applications are delivered over the Internet by the provider, so that the applications don't have to be purchased, installed, and run on the customer's computers. SaaS providers were previously referred to as ASP (application service providers).

Salesforce.com
An online SaaS company that is best known for delivering customer relationship management (CRM) software to companies over the Internet.

Service migration
The act of moving from one cloud service or vendor to another.

Service provider
The company or organization that provides a public or private cloud service.

SLA
Service level agreement -- A contractual agreement by which a service provider defines the level of service, responsibilities, priorities, and guarantees regarding availability, performance, and other aspects of the service.

Subscription-based pricing model
A pricing model that lets customers pay a fee to use the service for a particular time period, often used for SaaS services. See also Consumption-based pricing model.

Utility computing
Online computing or storage sold as a metered commercial service, in a way similar to a public utility.

Vendor lock-in
Dependency on the particular cloud vendor and difficulty moving from one cloud vendor to another due to lack of standardized protocols, APIs, data structures (schema), and service models.

Vertical cloud
A cloud computing environment that is optimized for use in a particular industry, such as health care or financial services.

Virtual private data center
Resources grouped according to specific business objectives.

VPC
Virtual private cloud -- A private cloud that exists within a shared or public cloud, e.g., the Amazon VPC that allows Amazon EC2 to connect to legacy infrastructure on an IPsec VPN.

Windows Live Services
Microsoft's cloud-based consumer applications, which include Windows Live Mail, Windows Live Photo Gallery, Windows Live Calendar, Windows Live Events, Windows Live Skydrive, Windows Live Spaces, Windows Live Messenger, Windows Live Writer, and Windows Live for Mobile.
