
Wednesday, November 28, 2012

Lowering the Startup Barrier to Disruption Through De-Centralization

Today we are honored to have Yermo Lamers, noted software developer and tech leader, as a guest blogger. Yermo has developed many industry-leading software applications and has a deep background in communications software, data platforms, and command and control systems. In his "spare time" Yermo designed and developed a software platform for hosting socially aware portals, including his own at http://miles-by-motorcycle.com where you can follow his passion for motorcycling, and http://a-software-guy.com/ where he maintains a technology-centric discussion board.
 

By: Yermo Lamers


 

We have a strong natural bias to keep using existing ideas that have served us well. Once established, our thinking is difficult to change. But the world around us changes relentlessly. Herein are sown the seeds of disruption.

Oftentimes, incremental and seemingly insignificant changes in technology have huge effects that are not immediately apparent. Network speeds, which have been incrementally improving for decades, are such a change. Sure, we can do things faster, but what does it mean?

Human beings making a request of an information system will typically start to get bored after a few seconds and frustrated after a few more. That's the benchmark we look for in getting a response. In the good ol' days, network speeds were only fast enough to transmit a few raw characters over any distance in that time-frame. I remember as a little kid playing the game of Zork after hours using a Silent 700 thermal paper terminal connected using an acoustic coupler modem to a minicomputer at NASA. I think it could only transmit at 110 baud. You typed in a line, “Pick up sword”. You waited a second or two and then the thing started whirring a response back at you. That system could only transmit one line of characters at a time “fast enough” to match the human expectation. It was a natural consequence of these slow connection speeds that the world was ruled by dumb terminals connected to centrally located minicomputers and, on the high end, mainframes. Huge proprietary businesses with wide moats were built, protected by this centralization.

Then one day, I heard about this impossibly fast new technology called Ethernet. “10 Mbit/s?” There are those who say the PC alone brought about the end of the minicomputer and mainframe era. I would disagree and suggest that it was, in fact, Ethernet that was the key to disrupting their world. Ethernet was fast. But what did it mean? Ethernet meant you could now inexpensively hook commodity machines together to quickly distribute data in a way that was not possible before. Expensive minicomputers, which used to be data store and computational powerhouse combined, could now be replaced by commodity machines that acted essentially as nothing but a data store. Computation was offloaded to relatively inexpensive workstations. The world of client-server was born, and a whole new industry came along to disrupt the one before. Importantly, economies of scale made the knowledge to run these machines a commodity, which lowered one of the big barriers to starting new businesses: talent was now available. In a way, it was increases in local area network speeds, more than the PC itself, that enabled the rise of Microsoft. Microsoft was able to see ways to exploit the new context with fresh eyes, at the expense of IBM, which was still caught in thinking that the centralized models that had worked before would continue to be competitive.

Microsoft built an empire on the local area network. They controlled the server and the workstation. What I never saw was how Microsoft's moat was tied to a particular Goldilocks zone of wide area network speeds.

It started some time in '93. I remember getting a US Robotics Dual Standard modem. It could talk to another modem of the same type at 14,400 bits per second. What did this mean? It meant I could download a complete distribution of Linux in some reasonable time. As modem speeds increased, it became easier and easier for programmers to start distributing what they had written. The free software movement had been around for quite some time, but it was the advent of the high-speed modem that, in my opinion, was key to its rise. Just as I did, Microsoft failed to notice that this was a harbinger of things to come. Linux was not the threat; not by itself. Network speed was the real threat to its business model.

As long as broadband speeds were low enough and it was impractical to distribute truly large quantities of data quickly, Microsoft's position was defensible. They still controlled the client and the server on the LAN. However, at some point, wide area network speeds became fast enough to disrupt Microsoft's stranglehold on business processes. Mired in old ideas, one might think this is because it was easier to distribute large quantities of software, thus threatening Microsoft's hold on distribution. I would argue that once network speeds became fast enough, sometime in the early 2000s, they led to the ascendancy of business models that could not have succeeded before. The era of Google, Facebook, and other Web 2.0 companies was upon us, and Microsoft's model of services centralized to the LAN was suddenly feeling antiquated. What did these new high-speed network connections mean? Third parties somewhere out on the vast internet could now deliver experiences to users, within that critical couple-second window, that rivaled the experiences delivered by local desktop software. The browser became the platform. This is actually what killed my little stock market software company. It turns out users hate installing software. They hate updating it. They hate not being able to use it wherever and whenever they want to. With functionality delivered by Web 2.0, there's nothing for the user to install or update. They just log in and use the service on whatever device they want. The availability of these Web 2.0 services also had a positive effect on small business. It meant that the costs to start a new business had once again been reduced. A business could now start out without needing to run its own administrative servers or even its own network. Just use Google Docs, an online payroll service, and an accounting service to get off the ground. There is little need to fund an expensive IT staff for internal operations anymore, let alone pay Microsoft's licensing fees.
Additionally, these services enabled a more mobile and distributed workforce. Most small businesses can't afford the infrastructure costs to make their internal, Microsoft-dominated networks available to a mobile workforce. Now, with Web 2.0, they get a mobile-enabled workforce for free. This in turn enables the business to look for talent where it happens to be, regardless of whether it's local, across the country, or around the world. The local area network that Microsoft dominated was another kind of centralized model disrupted by network speed increases.

Network speeds continue to increase, and there's a new, potentially larger disruption of a centralized model in the works. Network speeds are starting to go beyond “human response time” and are beginning to reach what we can call “machine speed”. Reaching a speed where a rich experience can be delivered to a user within their boredom threshold was the catalyst that disrupted one of the most successful businesses in history. What effect would it have if network speed increased to the point where in-machine and inter-machine data transfer rates became less distinguishable?

A core assumption in the last 40+ years of operating system design is that communicating with the outside world is slow. Machines are distinct, and services are centralized on the machine. I have my machine and you have yours, and they run distinct operating systems. Even if I have a datacenter, each machine runs its own OS.

If the network is so fast that I can call services between machines at something close to “machine speed” across the country or even around the world, the paradigm of single distinct machines starts looking dated. What new disruptive decentralization might occur?

Web 2.0 companies brought zero-install, zero-maintenance services to the people side of the business. For many companies, especially heavily online enterprises, there are still very significant costs associated with providing the core services of the business to its users. There are servers to configure. There are programs to write and third-party components to integrate. There's database administration to do and patches to apply. There's also excess capacity which has to be built, maintained, paid for, and left idle to handle the occasional unexpected surge in usage. Then there are all the very expensive salaries to pay. I've heard many entrepreneurs bemoan how difficult and expensive it is to find talent. Imagine being able to launch a new initiative quicker, with less overhead, less staff, and less need for development and administration.

This new disruptive force is called the “cloud”. And by the cloud, I do not mean just virtual servers that you still have to administer. I mean the ability to distribute and auto-scale the components that have traditionally made up online software systems. This is called Platform as a Service, and it does for software what Web 2.0 did for people: it radically decentralizes it.

With “Platform as a Service” (PaaS) models, you no longer run any kind of server at all. You develop your application, which represents your unique value proposition, and then push it out to your PaaS cloud vendor. The vendor runs your application as just another process in their distributed network of machines. There's no administration, and there is built-in scalability. If you happen to get favorable press and suddenly need more capacity, there's no need to wake up your IT staff, since you don't have one. You simply open your control panel and spin up additional instances of your application to match demand. There's no excess capacity for you to carry and pay for unnecessarily. Your operational costs are reduced. Your ability to respond is increased.
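The scale-to-demand step above can be sketched in a few lines. This is a hypothetical illustration only: the function name, the per-instance capacity figure, and the thresholds are all invented and do not correspond to any real PaaS vendor's API.

```python
import math

# Hypothetical sketch of the demand-based scaling decision a PaaS vendor
# automates behind the control panel. All numbers here are illustrative.

def desired_instances(requests_per_sec: float,
                      capacity_per_instance: float = 50.0,
                      min_instances: int = 1,
                      max_instances: int = 100) -> int:
    """Return how many application instances the current load calls for."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

# Quiet day: a single instance idles along, so you pay for almost nothing.
print(desired_instances(20))      # 1
# Favorable press hits: scale out without waking anyone up.
print(desired_instances(4200))    # 84
```

The point of the sketch is the shape of the economics: capacity tracks demand, so the excess capacity a business used to build, maintain, and leave idle simply disappears from the balance sheet.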

But more importantly, there is an ecosystem of third-party online service components rapidly developing which is likely to shorten online software development cycles. There are features almost every online application requires, such as image manipulation, databases, messaging, e-commerce, tracking, reporting, and alerting, among many others. In the past, these services had to be developed, or at least downloaded and installed on some server. Then they had to be maintained, patched, and upgraded. Imagine if such services were simply available online somewhere, so that you could sign up for them and, for a nominal incremental cost, just hook them into your application and start using them. Imagine that these components could scale based on demand. Sure, resizing a couple hundred images is no problem. But what if you run into hockey-stick adoption and need to resize a million tomorrow? Imagine how the act of building applications could become nothing more than hooking pre-built, scalable, zero-administration distributed services together.
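To make the idea concrete, here is a deliberately toy sketch in which the "application" is nothing but the wiring between two hosted services. Every class and method here is invented; the point is the shape of the program, not any real vendor's API.

```python
# Illustrative only: two stand-ins for pay-per-use hosted services. In a
# real system each would be a remote API that scales on the vendor's side.

class HostedImageService:
    """Stands in for a hypothetical hosted image-resizing API."""
    def resize(self, image_id: str, width: int) -> str:
        return f"{image_id}@{width}w"

class HostedQueue:
    """Stands in for a hypothetical hosted messaging service."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def drain(self):
        items, self._items = self._items, []
        return items

def ingest(image_ids, images, queue):
    """The whole 'application': hook pre-built services together."""
    for image_id in image_ids:
        queue.push(images.resize(image_id, 800))
    return queue.drain()

print(ingest(["cat.jpg", "dog.jpg"], HostedImageService(), HostedQueue()))
# ['cat.jpg@800w', 'dog.jpg@800w']
```

Whether the batch holds a couple hundred images or a million, the application code does not change; the scaling burden moves to the service vendors.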

The context has changed. Old ideas based on centralization are a competitive disadvantage. In this new context of faster network speeds, the cloud and Platform as a Service models, new initiatives can be built with less funding, less staff, less infrastructure and with a much shorter time to market while being more scalable and vastly more distributed.

What will it mean when networks become faster still? What kinds of hidden centralization that we don't even question now will be disrupted?

Thursday, October 11, 2012

Lessons learned from climbing and skills applied to business

Mike Moniz, our CEO here at Circadence, recently did something that very few others have: he summited Mt. Everest. Here is an article Mike wrote describing how he applies lessons learned from climbing to business.

By Michael J. Moniz, CEO, Circadence

I am the co-founder, president, and CEO of Circadence. We are the leading WAN optimization company, relied on by organizations needing additional data transport speed, reliability, and consistency. When not in the office, I am an avid alpinist. My son Matthew and I hold the world speed record for our ascent of the 50 highest mountains in the U.S. in 43 days. I have summited five of the Seven Summits, including Mt. Everest, and am one of the few individuals to hold the distinction of summiting two 8,000-meter peaks (Mt. Everest, 29,029 ft., and Lhotse, 27,605 ft.) within 24 hours. If you think one has nothing to do with the other, think again: CEOs, heads of companies, and extreme athletes have more in common than most people think, and the ability to translate lessons learned from the mountain to the boardroom, and vice versa, is paramount.

I believe that every significant challenge in life creates a better version of yourself. Those opportunities, threats, and obstacles help people learn more about themselves, their team, and their environment. Effective communication is critical for climbing as well as business. Your life could be saved by listening to a climbing partner who alerts you to a dangerous area. When faced with a business decision, communicating the pros and cons and collaborating with your colleagues will help with decision making and quick responses to any situation. And, as a leader, it’s my job to ensure the proper tools are in place to consistently evaluate situations or crises and map out a plan.
In high-altitude mountaineering, every climb is different; therefore, it is critical to prepare for and understand each one. Because of that, it is essential to do your homework: research each mountain and the weather, and study the failures of others in order to be prepared. Training will also be different depending on the mountain. In business, it’s no different; we do our research and assess why companies have failed. Having that research in hand gives us a leg up as we work toward success.
Whether I am climbing the largest mountain in the world or completing a major business deal, I truly appreciate the challenge and do not take the risk of either lightly. What the mountains teach us is that if we don't try it, it never happens.

Friday, May 18, 2012

Don’t let BYOD become BMNP (Bring Me New Problems)

The Bring Your Own Device (BYOD) trend presents enterprises with a number of significant challenges to go along with the potential benefits. Certainly there are positives associated with letting employees use their own computing devices, especially mobile technologies such as tablets and smartphones, to access enterprise resources, including reduced costs and enhanced productivity.

There are also some rather serious complications, including security, manageability, and control. For the medical industry there are additional compliance issues to be aware of, and enterprises must ensure that regulatory standards continue to be met. The medical community also has to contend with unique elements, including compliance with insurance practices and acceptable levels of legal risk.

A successful BYOD implementation strategy will have to include a rigidly enforced acceptable use policy. Within regulated industries such as medicine, I believe that in order to enforce a solid acceptable use policy, enterprises must maintain complete management control of the device, enterprise data, and enterprise connectivity.

Enforcing enterprise standards on mobile devices generally begins with Mobile Device Management (MDM) and Mobile Application Management (MAM) systems. MDM/MAM systems enable enterprise administrators to remotely configure and provision devices; install applications; troubleshoot, administer, and secure the device; and, if required, remotely wipe the device in accordance with established policies.

Additionally, MAM platforms enable the enterprise to implement version control, patch enforcement, and administrative access for the applications installed on the remote device. MDM and MAM are essential if mobile devices are going to be allowed to access and utilize enterprise systems, and they are a key tool in enforcing acceptable use policies.
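As a rough illustration of what MDM-style policy enforcement boils down to, here is a hypothetical compliance check. The policy fields, thresholds, and device report below are all invented for the sketch; real MDM/MAM platforms use far richer schemas and act on the result (block access, quarantine, or remotely wipe) rather than just reporting it.

```python
# Hypothetical sketch: the yes/no compliance decision an MDM platform makes
# before granting a device access. All field names and values are invented.

POLICY = {
    "min_os_version": (6, 0),         # oldest OS release allowed
    "require_encryption": True,       # device storage must be encrypted
    "max_screen_timeout_sec": 300,    # enforced inactivity lock
    "min_app_version": (2, 1),        # oldest managed-app build allowed
}

def is_compliant(device: dict, policy: dict = POLICY) -> bool:
    """Return True only if the device report satisfies every policy rule."""
    return (tuple(device["os_version"]) >= policy["min_os_version"]
            and (not policy["require_encryption"] or device["encrypted"])
            and device["screen_timeout_sec"] <= policy["max_screen_timeout_sec"]
            and tuple(device["app_version"]) >= policy["min_app_version"])

device = {"os_version": (6, 1), "encrypted": True,
          "screen_timeout_sec": 120, "app_version": (2, 3)}
print(is_compliant(device))   # True
device["encrypted"] = False   # out of policy: candidate for blocking or wipe
print(is_compliant(device))   # False
```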

Because of the level of control needed to effectively implement an access control policy that satisfies the legal, regulatory, and best-practices requirements of the medical community, it may prove difficult to broadly implement BYOD with existing user devices. For this reason, some organizations are adopting a hybrid model in which the enterprise develops a list of acceptable devices and then offers them to employees at a discounted price, in conjunction with the acceptable use policy.

One of the biggest, and most publicly known, challenges of BYOD is ensuring the security of data on the device and when transmitting to and from the enterprise. Effectively securing enterprise data means encrypting data both at rest and in motion. Securing data at rest includes encrypting the data on the local storage medium so that if an unauthorized user attempts to read it, there will be nothing usable. For mobile devices, it is best to encrypt the entire device. Data-at-rest security must also incorporate user access controls, including effective passwords and enforced logins to the device, with screen timeouts and logouts for inactivity.
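One concrete building block of data-at-rest protection is stretching a device passcode into an encryption key with a salted, iterated key-derivation function, so that a copied storage image is useless without the passcode. The sketch below uses Python's standard library PBKDF2 routine; it is illustrative only and omits the rest of a real full-disk encryption stack (cipher modes, hardware key stores, key escrow).

```python
import hashlib
import secrets

# Minimal sketch: derive a strong symmetric key from a passcode. The salt
# is random per device and stored alongside the ciphertext; the iteration
# count deliberately makes brute-forcing the passcode slow.

def derive_key(passcode: str, salt: bytes, iterations: int = 200_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, iterations)

salt = secrets.token_bytes(16)
key = derive_key("correct horse battery", salt)
print(len(key))   # 32-byte key, suitable for a 256-bit cipher

# Same passcode and salt always reproduce the key; a wrong passcode does not.
assert derive_key("correct horse battery", salt) == key
assert derive_key("wrong passcode", salt) != key
```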

Securing data in motion includes the use of VPN connections to the enterprise and the use of access controls to ensure that only authorized users and applications are able to connect. Although most mobile devices support establishing VPNs, there are enterprise-specific capabilities available that provide enhanced management and control of the connection and access. By enforcing secure connections and access control, in conjunction with encrypting the data on the device, a strong level of overall data assurance can be provided.
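On the data-in-motion side, the minimum any client application should enforce is certificate validation and a modern protocol floor before enterprise data moves. The sketch below shows this with Python's standard ssl module; it is a generic illustration, and a VPN or enterprise connector would layer its own management and access control on top.

```python
import ssl

# Sketch of a strict client-side TLS configuration: verify certificates and
# hostnames, and refuse legacy protocol versions outright.

def strict_client_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()            # verifies certs + hostnames
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

ctx = strict_client_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True
```

A context like this would then be passed to the client's socket or HTTP layer, so that an unverified or downgraded connection fails before any enterprise data is sent.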

BYOD can be a useful and effective element in an enterprise’s overall plan; there are, however, a number of precautions that should be taken to ensure successful implementation. Sound policies, effective enforcement, and good communication between enterprise administrators and end users will go a long way toward getting the most out of any BYOD effort.

MVO Optimization Daisy Chains

The concept of Daisy Chains, as it relates to Circadence MVO technology, involves the ability to connect to multiple destination networks from a single client and optimize the delivery of content from each of the connected networks individually. The Daisy Chain allows the client to utilize the Circadence MVO optimized path between multiple MVO appliances across networks (or across subnets within a single network) until it reaches the nearest point of access. This serves a dual purpose by extending the WAN optimization data path as well as the Circadence Link Resilience capability as far as possible (or as far as is necessary).

One way to envision this concept is to imagine an organization whose end users connect to applications and retrieve content located at more than one location across the enterprise. Typically this could include content located in a branch office as well as a headquarters location and perhaps a datacenter. In a typical WAN Optimization deployment, the end user or client-side implementation would have to have individual tunnels established from the endpoint to each head-end location separately. With Daisy Chaining enabled in MVO, there is a single head-end configured in the end-user client or client-side endpoint; only the first link in the chain needs to be configured.
To illustrate the concept, assume a user is connecting via laptop to their enterprise while on the road. The user has applications which connect to servers located within the enterprise and is requesting content that is located at multiple locations within the enterprise extranet. The user’s device has the MVO client installed and configured to connect to the MVO head-end installation located at their home office, which is a branch of the larger corporation. Further assume that the applications and content required are located at the user’s branch office, at the company headquarters, at a company datacenter, and at a cloud-hosted location. The configuration and workflow would be the following:


We can make the following assumptions about the network connection:
  1. The end user “A” is connecting to the public internet “B” via WiFi, Cellular Data, or wired Ethernet.
  2. The end user “A” may or may not have established a VPN connection to the Branch Office “C”.
  3. The Branch Office “C” is connected by an IP network connection to Corp Headquarters “D”.
  4. The Corp Headquarters “D” is connected by an IP network connection to a Corp Datacenter “W”. 
  5. The Corp Datacenter is connected by an IP network connection to a private or public Cloud Service “E”.

The MVO client application installed on “A” is configured as a MVO Remote with a peer connection to the MVO Hub located at “C”. The Hub at “C” has a MVO Managed Traffic Definition “MTD” configured for applications and content located on network “C” (c.c.c.c). If “A” requests content located on the c.c.c.c network the MVO Remote client will divert the original IP request to the MVO process, which will then encode the original IP packets into the enhanced TMP and send the request via the TMP protocol to the MVO Hub located at “C”. The Hub at “C” will process the request from “A” and return the appropriate c.c.c.c content via the TMP connection from the MVO Hub process at “C” to the MVO Remote process at “A”. The MVO Remote process at “A” will then decode the TMP packets into the original IP and forward to the original requesting local client process.
The MVO instance located at “C” is configured as both a MVO Hub and a MVO Remote. The MVO Remote process is configured to connect to the MVO peer located at “D”. The Hub at “D” has a MVO Managed Traffic Definition “MTD” configured for applications and content located on network “D” (d.d.d.d). If “C” requests content located on the d.d.d.d network the MVO Remote process will divert the original IP request to the MVO process, which will then encode the original IP packets into the enhanced TMP and send the request via the TMP protocol to the MVO Hub located at “D”. The Hub at “D” will process the request from “C” and return the appropriate d.d.d.d content via the TMP connection from the MVO Hub process at “D” to the MVO Remote process at “C”. The MVO Remote process at “C” will then decode the TMP packets into the original IP and forward to the original requesting local client process.  Additionally, the MVO instance at “C” has been configured to support Daisy Chains. The Daisy Chain configuration allows the MVO Hub instance to internally forward IP packets destined for a network which has MTD rules applied at the same MVO instance’s Remote process from a distant MVO Hub. With Daisy Chains enabled, if “A” requests content located on the d.d.d.d network the MVO Remote client will divert the original IP request to the MVO process, which will then encode the original IP packets into the enhanced TMP and send the request via the TMP protocol to the MVO Hub located at “C”. The Hub at “C” will process the d.d.d.d request from “A” and forward the request for content from d.d.d.d to the MVO Remote process at “C”. The MVO Remote process at “C” will send the request via the TMP protocol to the MVO Hub located at “D”. The Hub at “D” will process the request from “A” and return the appropriate d.d.d.d content via the TMP connection from the MVO Hub process at “D” to the MVO Remote process at “C”. 
The Daisy Chain enabled MVO instance at “C” will then internally forward the d.d.d.d content from the “C” Remote process to the “C” Hub process for transport to “A”. The MVO Remote process at “A” will then decode the TMP packets into the original IP and forward to the original requesting local client process at “A”.

The MVO instance located at “D” is also configured as both a MVO Hub and a MVO Remote. The MVO Remote process is configured to connect to the MVO peer located at “W”. The Hub at “W” has MVO Managed Traffic Definition “MTD” configured for applications and content located on network “W” (w.w.w.w) and Cloud Service “E” (e.e.e.e). If “D” requests content located on the w.w.w.w or e.e.e.e network the MVO Remote process will divert the original IP request to the MVO process, which will then encode the original IP packets into the enhanced TMP and send the request via the TMP protocol to the MVO Hub located at “W”. (The process for content from either w.w.w.w or e.e.e.e is fundamentally similar so only w.w.w.w will be detailed) The Hub at “W” will process the request from “D” and return the appropriate w.w.w.w content via the TMP connection from the MVO Hub process at “W” to the MVO Remote process at “D”. The MVO Remote process at “D” will then decode the TMP packets into the original IP and forward to the original requesting local client process.  Additionally, the MVO instance at “D” and at “W” have been configured to support Daisy Chains. The Daisy Chain configuration allows the MVO Hub instance to internally forward IP packets destined for a network which has MTD rules applied at the same MVO instance’s Remote process from a distant MVO Hub. With Daisy Chains enabled, if “A” requests content located on the w.w.w.w network the MVO Remote client will divert the original IP request to the MVO process, which will then encode the original IP packets into the enhanced TMP and send the request via the TMP protocol to the MVO Hub located at “C”. The Hub at “C” will process the w.w.w.w request from “A” and forward the request for content from w.w.w.w to the MVO Remote process at “C”. The MVO Remote process at “C” will send the request via the TMP protocol to the MVO Hub located at “D”. 
The Hub at “D” will process the w.w.w.w request from “A” through “C” and forward the request for content from w.w.w.w to the MVO Remote process at “D”. The MVO Remote process at “D” will send the request via the TMP protocol to the MVO Hub located at “W”. The Hub at “W” will process the forwarded request from “A” and return the appropriate w.w.w.w content via the TMP connection from the MVO Hub process at “W” to the MVO Remote process at “D”. The Daisy Chain enabled MVO instance at “D” will then internally forward the w.w.w.w content from the “D” Remote process to the “D” Hub process for transport to “C”. The Daisy Chain enabled MVO instance at “C” will then internally forward the w.w.w.w content from the “C” Remote process to the “C” Hub process for transport to “A”. The MVO Remote process at “A” will then decode the TMP packets into the original IP and forward to the original requesting local client process at “A”.
Functionally, the Daisy Chain process works as an inherent function of MVO being able to operate with both Frontend (Remote) and Backend (Hub) processes running simultaneously. Consider a network consisting of an MVO Remote “A”, a first MVO Hub “B”, and a second MVO Hub “C”. As traffic from the Frontend process at Remote “A” arrives at the Backend process of Hub “B”, if it does NOT meet an MTD rule being diverted by the “B” Hub’s Frontend process, it exits the MVO application at that location. If the traffic from the Remote meets the definition of an MTD divert running on the Frontend process at the “B” Hub, it is reprocessed and sent along to the upstream “C” Hub. The configuration is best approached in reverse, from the last link in the chain to the first remote client.
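The forwarding rule just described can be modeled in a few lines. This is a hedged sketch only: the class and field names are invented and this is not the MVO implementation. It mirrors the earlier example topology, with hub “C” chaining to “D” and “D” to “W” (which also fronts the cloud network “E”).

```python
# Toy model of daisy-chain forwarding: traffic arriving at a hub either
# exits locally (destination matches a local MTD-style rule) or is
# re-diverted toward the next hub in the chain. Names are illustrative.

class ChainNode:
    def __init__(self, name, local_networks, upstream=None):
        self.name = name
        self.local_networks = set(local_networks)  # networks served here
        self.upstream = upstream                   # next hub in the chain

    def handle(self, dest_network, hops=None):
        hops = (hops or []) + [self.name]
        if dest_network in self.local_networks:
            return hops                            # content served locally
        if self.upstream is not None:
            return self.upstream.handle(dest_network, hops)  # daisy chain
        raise LookupError(f"no route to {dest_network}")

# Configure in reverse, from the last link in the chain to the first.
w = ChainNode("W", {"w.w.w.w", "e.e.e.e"})
d = ChainNode("D", {"d.d.d.d"}, upstream=w)
c = ChainNode("C", {"c.c.c.c"}, upstream=d)

print(c.handle("c.c.c.c"))   # ['C']
print(c.handle("w.w.w.w"))   # ['C', 'D', 'W']
```

Note how the client-facing node “C” is the only entry point the end user needs configured, exactly as in the walkthrough above; each hub only knows its own local rules and its single upstream peer.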

Imaging Economics Interview Q&A


What is it that WAN optimization products do?
WAN Optimization products such as Circadence’s MVO platform substantially increase the performance of applications which use the network. With Circadence MVO, file transfers, including medical images, are much faster and their delivery is ensured. By implementing MVO WAN Optimization, it’s possible to increase transfer speeds by more than 300 percent. For example, using Circadence, one healthcare organization was able to reduce the time required to complete image transfers from almost four minutes down to 13 seconds.

MVO WAN optimization significantly reduces corrupt, incomplete, or lost files transferred across the network. Increasing data integrity and reliability improves the quality of healthcare provided by medical imaging institutions and limits the number of resends that are typically required. Implementing optimization maximizes the return on investment in bandwidth and decreases or eliminates the need for expensive infrastructure upgrades.

Can you give the bird’s eye view of Circadence MVO for Mobile in a health care setting?
Circadence MVO for Mobile provides full end-to-end optimization between applications and content. In the healthcare setting, this could include hospitals, remote clinics, distributed call centers, remote offices, and individual users on laptops, tablets, and smartphones. In healthcare, typical applications that MVO will be utilized for are EMR/EHR and image study transfers such as with PACS, where MVO facilitates fast and effective patient care and increases the provider’s capabilities.

What sets Circadence MVO for Mobile apart from similar solutions?
Circadence MVO is currently the only WAN Optimization solution that supports MS Windows, Android, and Apple iPhone/iPads.  Circadence is the most innovative mobile optimization provider and offers the most deployment options available including cloud, hardware, software, VM, and 3rd party integration with the MVO SDK. Circadence MVO also offers leading performance transferring all types of image studies and enables full optimization without caching or storing content or modifying it in any way.

Highlights:
- OEM integration with 3rd party platforms and applications, such as PACS imaging systems
- Broadest deployment capabilities, reducing infrastructure costs and accelerating ROI
- Full support for all mobile platforms including Windows, Android, and Apple iOS
- Optimizes all data dynamically in real time, without caching or modifying
- Circadence patented Link Resiliency maintains application session persistence

In recent years, how has the emergence of mobile devices changed the health care environment, specifically the radiology department?
The emergence of mobile devices, specifically high-performance tablets with high-definition displays and strong graphics performance, has enabled radiology professionals to “untether” themselves from the imaging systems. Essentially, mobility enables practitioners to be closer to their patients, provide a higher level of care, and greatly increase efficiency. As the capabilities of the mobile platform increase, the demands placed on the network infrastructure increase substantially. The size of image studies increases nearly exponentially as they increase in definition, making WAN Optimization essential for effective use. The capabilities mobility brings to radiology are also increasing rapidly and in many ways are creating positive changes in the dynamic between caregiver and patient. Mobile access to patient studies enables healthcare professionals to be closer to their patients, and to bring the technology to the patient rather than forcing the patient to come to the technology.

How can radiology practices utilize WAN optimization? What are some of the benefits?
WAN Optimization such as Circadence MVO is essential for delivering radiology studies across networks. The broad deployment options offered by Circadence enable radiology practices to implement optimization in ways that best suit their particular practices. Distributed radiology offices can use WAN Optimization to provide much higher performance transferring content between offices. Centralized radiology providers can dramatically increase the number of studies their radiologists are able to read, increasing revenue and decreasing costs. Mobile WAN Optimization enables practitioners to access even larger, high definition studies from wherever they happen to be, improving patient care and increasing access.  

Does the Circadence MVO for Mobile have to comply with HIPAA guidelines? If so, how is that achieved?
Circadence MVO can be a key component of an organization’s HIPAA compliance program. Under HIPAA, healthcare organizations must ensure that patients’ privacy is protected; this includes protecting confidential information. Unlike other WAN Optimization vendors, the Circadence MVO Optimization platform does not cache, store, or decrypt information sent across the network. Circadence MVO enables healthcare providers to implement the leading WAN Optimization platform across their organizations while maintaining compliance with current guidelines.

What are some of the difficulties when installing/utilizing the Circadence WAN optimization solution?
As with any fast-changing landscape, there can be challenges as organizations adapt. The healthcare industry in particular is facing an extraordinary amount of change in the way that healthcare services are provided and accessed, with new capabilities being announced daily. For an industry as regulated as healthcare, the challenges can be administrative and procedural as much as, or more than, technical. Additionally, the increasing focus on technology is forcing medical practitioners at all levels to become more familiar with systems, terminology, and practices that may be entirely new to them. As regulators, providers, and practitioners become more educated and comfortable with technology, the pace of adoption is increasing and administrative barriers to adoption of emerging technologies are being removed. Circadence places an emphasis on making the MVO optimization platform as easy to implement and manage as possible, both technically and administratively, allowing providers to focus more on providing high-quality services.

Mobile Usage

Mobile usage is rising dramatically, placing greater strain on networks and creating bandwidth and connectivity issues. Exacerbating the situation, individual mobile users are placing ever-higher demands on those networks.

With the increased popularity of smartphones, tablets and other mobile devices, the boundaries between work and personal life are blurring. As a result, people are using mobile devices to check work email, review and work on documents, and perform a host of other work-related functions.

As more and more critical information is accessed outside of brick-and-mortar offices, wireless and broadband companies need to plot the best way to accommodate the spike in traffic. Enterprises must also be prepared to help their employees remain connected regardless of their location or device.

The rise of mobile

The number of mobile users has sharply increased in recent years. As of June 2011, according to CTIA, a non-profit organization supporting the wireless industry in the United States, there were 322.9 million wireless subscriber connections. This represents a 745 percent increase from five years earlier, when there were 38.2 million wireless subscriber connections. There are seven billion connected devices worldwide, and by 2025 it is predicted that number will have ballooned to 50 billion.
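The growth figure above is straightforward percent-increase arithmetic on the two CTIA subscriber counts:

```python
# Percent increase between the two CTIA subscriber figures cited above.
old = 38.2e6    # wireless subscriber connections five years earlier
new = 322.9e6   # wireless subscriber connections as of June 2011

increase = (new - old) / old * 100
print(f"{increase:.0f}% increase")  # roughly 745%
```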

Consumers rely on their phones and mobile devices for a wide variety of purposes such as listening to music, downloading apps, surfing the Internet and watching videos. More people are also using personal devices for work-related functions such as managing email and working on documents. Unlike texting or making a phone call, this usage places an incredible strain on networks, consuming large amounts of data.

The surge in data usage coupled with the rise of BYOD (Bring Your Own Device) threatens to overwhelm the infrastructure supporting mobile devices. As a result, some carriers have stopped offering unlimited data plans. In the enterprise, companies are increasingly adopting WAN optimization to ensure they can quickly transmit and process information even in areas with low-quality or intermittent network connections.
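One way to cope with intermittent connections is to resume an interrupted transfer from the last confirmed byte instead of restarting it. The sketch below is purely illustrative (a simulated flaky link, not Circadence's actual mechanism), but it shows why resumable transfers keep making forward progress where naive retries would not:

```python
# Illustrative sketch: resuming a transfer from the last acknowledged
# offset so an intermittent link still makes forward progress.
# FlakyLink and the chunk size are hypothetical simulation details.

class FlakyLink:
    """Simulated link that drops the connection on every few sends."""
    def __init__(self, drop_every: int) -> None:
        self.drop_every = drop_every
        self.sent_chunks = 0

    def send(self, chunk: bytes) -> bool:
        self.sent_chunks += 1
        # Every drop_every-th send fails, simulating a dropped connection.
        return self.sent_chunks % self.drop_every != 0

def transfer(data: bytes, link: FlakyLink, chunk_size: int = 4) -> int:
    """Send data over a flaky link, resuming from the last acknowledged offset."""
    offset = 0
    attempts = 0
    while offset < len(data):
        attempts += 1
        if link.send(data[offset:offset + chunk_size]):
            offset += chunk_size  # chunk acknowledged; advance
        # on failure: reconnect and retry from the same offset
    return attempts

link = FlakyLink(drop_every=3)
attempts = transfer(b"x" * 40, link)
print(f"delivered 40 bytes in {attempts} send attempts")
```

Even with every third send failing, the transfer completes; only the dropped chunks are retried, not the whole payload.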

Unfortunately, physically expanding the wireless infrastructure in the United States, for instance by building more cell phone towers, is not feasible, and the cost of doing so would be astronomical.

In this challenging environment, service providers want the ability to more efficiently utilize the bandwidth available. At the same time, consumers have grown accustomed to being able to use their phones for work and fun. The always-on worker armed with a laptop, tablet and phone is quickly becoming the norm. In response, enterprise organizations are also seeking a way to accommodate additional devices.

WAN optimization goes mobile

WAN optimization, once reserved for military organizations looking to move critical information in the field, is gaining momentum in the enterprise and could play a critical role in helping alleviate mobile network overload. Loosely defined, WAN optimization is a collection of technologies and techniques used to maximize the efficiency of data flow across a wide area network. WAN optimization enables organizations to transmit information and gain access to critical applications faster.
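As a concrete illustration of one of the simplest techniques in that collection (a generic example, not a description of any vendor's implementation): compressing payloads before they cross the slow wide-area link means fewer bytes traverse the constrained connection.

```python
# Illustrative sketch of one basic WAN-optimization technique: compress
# payloads before they cross the slow link, decompress on the far side.
# Real platforms layer many more techniques (deduplication, protocol
# acceleration, caching) on top of simple compression.
import zlib

def send_optimized(payload: bytes) -> bytes:
    """Compress a payload for transmission across the WAN."""
    return zlib.compress(payload, level=9)

def receive_optimized(wire_data: bytes) -> bytes:
    """Decompress on the receiving side of the link."""
    return zlib.decompress(wire_data)

# Redundant data (common in documents, logs, and many image formats)
# shrinks dramatically, so far fewer bytes cross the constrained link.
original = b"patient-record;" * 1000
on_the_wire = send_optimized(original)
print(len(original), "->", len(on_the_wire), "bytes on the wire")
```

The receiver recovers the original bytes exactly; only the transmission is smaller.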

The enterprise is increasingly adopting WAN optimization as a method for transmitting information quickly and securely. For example, healthcare organizations using WAN optimization can transmit images and patient data quickly regardless of connection quality or bandwidth strength.

For companies and organizations seeing an influx of mobile workers and personnel utilizing personal devices, mobile WAN optimization provides the same benefits as WAN optimization but extends them to mobile devices. This enables organizations to embrace remote and mobile workers, confident that they can stay connected regardless of location. WAN optimization can also help alleviate pressure on existing networks. For example, carriers can provide the same level of service but use 1/6 of the infrastructure to deliver it.

Conclusion

As mobile devices continue to become more sophisticated and mobile device adoption continues to explode, pressures on the current networks will continue to mount. BYOD and the mobile worker are also creating issues for organizations and companies who want to ensure their employees are able to quickly and securely access the information they need to perform their duties regardless of their location.

In this atmosphere, it's critical that carriers and organizations look for innovative ways to increase the effectiveness of their existing bandwidth. If mobile adoption continues at its current pace, soon they'll have no choice but to embrace new solutions to address the situation. By embracing new technologies and solutions such as mobile WAN optimization, organizations will be able to ensure that the mobile workforce isn't left out in the cold.

Wednesday, April 20, 2011

Cloud, Mobile, and why Network Optimization is essential

When discussing "Cloud Computing" and "Mobile Computing" it's starting to get difficult to have a discussion about one without in some way talking about the other. Also, I think they may turn out to be the endpoints of the same organism (we'll let the terminal equipment manufacturers and cloud providers sort out which is the head end and which is the...). Before getting too deep, allow me to standardize on some definitions around the terms "Cloud" and "Mobile" for the purposes of this discussion, and take giant, complex, difficult concepts and boil them down to easily digested techno porridge:

"Cloud Computing" or just "Cloud": We'll go ahead and use the NIST definition here: Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. -NIST.gov – Computer Security Division – Computer Security Resource Center". Csrc.nist.gov

"Mobile Computing":  Taking a computer and all necessary files and software out into the field. Definition courtesy of US Bureau of Land Management http://www.blm.gov/wo/st/en/prog/more/bea/Glossary.html#m

I'm intentionally using broad definitions of both Cloud and Mobile because that is exactly what our customers and partners are doing. What specifically a person means by either "Cloud" or "Mobile" is going to depend on a wide variety of factors including location, application, public or private sector, industry, role, etc. The above definitions do an admirable job of rolling the multitude of variants up into an easy generalization that captures the essence of each. They are also at a high enough level that the intrinsic tie between Cloud and Mobile should be readily apparent. We can highlight the synergy by reducing the definitions one step further:

"Cloud" : Easy, on-demand access to applications and content hosted somewhere else.
"Mobile": Everything you need to run applications and look at content wherever you are.

A match made in heaven. And a headache in the making if ever there was one. Although it's easy to see where Cloud Computing and Mobile are closely aligned, some may miss the remarkably important piece that's missing. If the apps and content are hosted somewhere else, and you're accessing them from wherever you are....HOW are you accessing them? Via a network of course. But, which network? Does it have enough bandwidth for your needs, is it all yours or shared, is it safe and secure, do you know anything about it at all? The network is an essential element for both Cloud and Mobile, neither will work without one, but is a little difficult to define well here. We'll call it the "network", now giving us:

"Cloud" : Easy, on-demand applications and content hosted somewhere else.
"Mobile": Using applications and looking at content wherever you are.

"Network": How Mobile things connect to Clouds.

The fantastic opportunities that Cloud and Mobile have to offer are only accessible if there is a network connecting them. The network must be sufficient to carry the content users are requesting, consistent enough for transactions to be completed, and assured enough for reliable connectivity and data delivery. Because we've already defined Cloud as being "somewhere else" but haven't said where, and we've defined Mobile as being "wherever you are" but not where or how, the actual characteristics of the network remain unknown. These are exactly the conditions that users face every day from wherever they are. The vast majority of users connecting to applications or content do not have a detailed knowledge of the network they are using to connect themselves to the computing platforms and content storage they are accessing. If those users are connecting from anywhere other than their workplace, they may know their connection only by the marketing brand delivering it: Verizon Wireless, WiFi by AT&T, Boingo, T-Mobile Hotspot. Clearly the network is the critical bridge, and if the user can't provide actionable information about the character and capabilities of the connection, we must rely on platforms, applications, or content to provide it without user input. This is the principal reason why WAN Optimization technology is essential for Cloud Computing and Mobile: both require network connectivity in order to function, and neither can rely on user input or single-sided configurations to ensure successful and efficient operation.

Circadence WAN Optimization for Mobile http://bit.ly/fjXYt4 and Cloud http://bit.ly/i7O0Oy