Latest Technologies, Industry Trends & Best Practices

Open vs. Proprietary Systems: Open Software Standards


In this third installment of our case for Open Systems Computing (OSC), we will discuss Open Software Standards. We will define the terms and point out the advantages of implementing solutions that adhere to Open Software Standards.

For the strict purposes of this blog post, we shall define Open Software Standards as follows: the software 1) utilizes internationally accepted standard protocols, and 2) exposes standard Application Programming Interfaces (APIs) that are published. If a software solution meets both criteria, we will say it conforms to Open Software Standards.

Why is it important for a software solution to utilize standard protocols? First, let's define protocol. A protocol is a set of rules that endpoints use to communicate. If a software/hardware solution utilizes standard protocols, also called public or open protocols, other software/hardware solutions will understand the rules and will be able to establish communication. On the other hand, if software uses proprietary protocols, also called closed or private protocols, it will be difficult, or impossible, for another software/hardware solution to communicate with it. Standard protocols help ensure communication between applications by providing a common transport and a well-defined request/response structure. Internationally accepted standard protocols are those that have been accepted and approved by an international standards body, such as the Internet Engineering Task Force (IETF).
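The point about standard protocols can be made concrete with a small sketch. Below, a minimal HTTP server and an HTTP client from a completely separate library talk to each other successfully, not because they were designed together, but because both follow the same standard protocol. Any other HTTP-speaking client (curl, a browser, a third-party application) could talk to this server just as easily.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """A minimal HTTP server. Because it follows the standard protocol,
    any HTTP-speaking client can establish communication with it."""

    def do_GET(self):
        body = b"hello from a standard protocol"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo output quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client comes from a different library (urllib), yet it interoperates
# with the server because both obey the same published HTTP rules.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    status, text = resp.status, resp.read().decode()

server.shutdown()
print(status, text)  # 200 hello from a standard protocol
```

Had the server spoken a private, undocumented protocol instead, no off-the-shelf client would know how to form a valid request, which is exactly the lock-in problem described above.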

An API may be defined as the implementation of a protocol, or a set of protocols: a library of code through which the protocol is exercised. Publishing an API simply means making its specification publicly available. APIs may be either closed (private) or open (public); an open, publicly available API is also known as a standard API. When one software solution exposes standard APIs, other software solutions know how to communicate with it, which enhances its ability to integrate. Note that standard protocols always have APIs that implement them, but not all APIs implement standard protocols; some implement proprietary ones. A truly standard API, then, is one that is both published and built on standard protocols.

Some examples of internationally accepted standard protocols are SIP (Session Initiation Protocol), HTTP (Hypertext Transfer Protocol), SOAP (Simple Object Access Protocol), and FTP (File Transfer Protocol), all of which are application-layer protocols that run over TCP/IP at the transport level. By publishing its APIs, a company commits to stability: internal code may change, but the published APIs must remain well defined and rarely altered, ensuring the long-term viability of the API and its integration potential.
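That stability commitment can be sketched in code. The names below (MessageSender, SmtpSender, SipSender, notify) are hypothetical, invented purely for illustration: the abstract class plays the role of the published API, and client code written against it keeps working no matter which vendor's implementation sits behind it, which is precisely what makes vendor swaps and third-party integration possible.

```python
from abc import ABC, abstractmethod

class MessageSender(ABC):
    """The published API: a stable, documented contract.
    Internal implementations may change freely; this signature may not."""

    @abstractmethod
    def send(self, recipient: str, body: str) -> bool:
        """Deliver body to recipient; return True on success."""

class SmtpSender(MessageSender):
    def send(self, recipient: str, body: str) -> bool:
        # A real vendor would speak SMTP here; internals can change at will.
        return True

class SipSender(MessageSender):
    def send(self, recipient: str, body: str) -> bool:
        # A different vendor might speak SIP instead.
        return True

def notify(sender: MessageSender, user: str) -> bool:
    # Client code depends only on the published API, not on any vendor's
    # internals, so either implementation can be swapped in without rework.
    return sender.send(user, "system maintenance at midnight")

print(notify(SmtpSender(), "ops@example.com"))      # True
print(notify(SipSender(), "sip:ops@example.com"))   # True
```

The key design choice is that `notify` is written against the abstract contract, never against a concrete vendor class; that is the code-level meaning of "no vendor lock-in."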

The following are some advantages of selecting a partner that adheres to Open Software Standards, in particular standard protocols and internationally accepted standard APIs.


No Vendor Lock-in: No single vendor can meet all the needs of every corporate customer in every application area. Implementing an environment of standard APIs and protocols allows for the seamless integration of best-of-class third-party software/hardware solutions. Customers are free to choose the best-of-class application that meets their unique needs regardless of vendor, and they will not be "locked in" to a single vendor who uses proprietary APIs and protocols, holds them hostage, and, in all likelihood, charges them a fortune.

Ease of External and Internal System Integration: Utilizing standard protocols and APIs gives customers ease of application integration with both internal and external applications, at least insofar as standard protocols and APIs are used by all of those applications.

Lower Total Cost of Ownership (TCO): When customers are free to choose the particular application that meets their customized needs, regardless of vendor, the cost of development and integration is greatly reduced; for instance, there is no need for extensive integration services. In addition, to keep customers, non-proprietary vendors must compete on the level playing field created by standard protocols and APIs, and, as mentioned in the first item, they cannot hold customers hostage.

Ease of Migrations: When utilizing a vendor that adheres to standard protocols and APIs, migrating from one vendor to another is more seamless, and integration services, if any, are minimized.

Greater Solution Resiliency: If software adheres to widely accepted standard protocols and APIs, it is implied that testing and quality assurance have taken place to ensure compliance with those standards. This should mean that the solution has been thoroughly tested for software stability and resiliency.

Higher Customer Satisfaction: Customer satisfaction is increased in an environment where open software standards are adhered to because vendors need to compete based on features and quality, rather than relying on the vendor lock-in concept mentioned above.

Innovation is Encouraged: Innovation is encouraged and increases when vendors compete. Competition increases with the implementation of internationally accepted standard protocols and APIs. Therefore, innovation increases when internationally accepted standard protocols and APIs are utilized.

By implementing internationally accepted standard protocols and APIs, software vendors comply with Open Software Standards to the benefit of customers, vendors, and entire industries. A true win-win situation.

Open vs. Proprietary Systems: Portability

Continuing the case for Open Systems Computing (OSC), and since we discussed Interoperability in the first blog post, we will focus on Portability today. 

Reiterating the definition we are using for OSC, it may be defined as those computer systems offering: 1) interoperability, 2) portability, and 3) open software standards. 

Portability, similar to interoperability, usually refers to how well "new" software products work with "existing," or "legacy," software and/or systems in a corporate environment. This raises the question: what does it mean to "work well" within the corporate environment? Precisely this: almost any software product can be made to work with any other software or hardware product (I understand this is a bit of a stretch, but bear with me). The heart of determining a high versus a low level of portability for a software product is how much modification is necessary, to either the "new" software being introduced into the existing corporate environment or the "legacy" software already operational in that environment, to make them work well together and function properly. If no, or very little, development is needed to integrate the "new" software with the "existing" software and/or system, then the "new" software can be termed highly portable. If an extensive rework of either the existing software or the new software is needed, then the "new" software could be termed less portable, but only if the fault lies with the "new" software. Let me explain.

The term portability, by definition, embodies the juncture of "existing" and "new" software being introduced into the corporate environment. However, while it is true that "new" software that integrates with ease into the existing infrastructure can be termed highly portable, the reverse is not necessarily true. The ease of integration depends not only on the portability of the "new" software product, but also on the openness and non-proprietary nature of the corporate environment as it exists prior to the introduction of the "new" software. In other words, if you have a completely closed proprietary system and you introduce "new" software into it, the integration of the two will only work with significant modification to the "legacy" system, e.g. perhaps emulation software. It would be very unfair to tag the "new" software as less portable; in this instance the culprit is not the "new" software, it is the corporate environment built upon non-standard protocols and non-standard APIs.

As we continue on our journey of OSC, my next post will address the third interdependent leg: Open Software Standards. Stay tuned and thanks again for joining us.


Open vs. Proprietary Systems: Interoperability

As I pondered what topic would be best to launch my participation in the Startel blog, I decided I would make the case for Open System Computing (OSC). While it is a large and complex topic, it is also a very important one. Why is OSC important? The succinct answer: because OSC 1) encourages innovation and 2) keeps costs down for consumers of technology, both corporate and end users. As it will inevitably take several blog posts to cover this extensive topic, I will attempt to break it up into more easily consumable, bite-size pieces. Today's post will focus on a small piece of OSC called Interoperability.

Open systems computing and the role of interoperability can be understood with the following example. Three months ago I bought a new Apple MacBook Pro, which cost nearly $4,000. About two weeks ago our Controller bought a new PC (a Dell laptop) from Best Buy for $399 with Windows 8 and the latest Intel chip. Although these are "work" computers, why would I bring up what looks like a Business-to-Consumer (B2C) situation on a Business-to-Business (B2B) blog? To make this point: why did my Mac laptop cost over 10x more than the Dell laptop? Sure, I got a solid-state flash drive and 16 GB of memory while our Controller got 1 GB of memory, but was that worth 10x the price? With her new Dell laptop she can add more memory, up to 16 GB, and it still would not make a dent in the price difference between her PC and my Mac. The bigger question is, why did I do it? Why did I pay 10x more? Well, I have been a Mac user for over 15 years; all my children and my wife use Macs, and we have a Mac network at home. I also like the look and feel, the ease of use, etc. But for the first time in 15 years I am questioning my decision to stick with Apple. After seeing Windows 8 functionality and testing Microsoft's new Surface Pro (as Microsoft says, "a powerful PC in a tablet form"), I realize that the PC has better customization features. For instance, I cannot even upgrade the RAM on my Mac. And the real kicker: interoperability.

Interoperability is simply the ability of diverse systems to operate together, to work together. The term is usually used within the context of the corporate ecosystem. For instance, if your HP printer works well with your IBM computer, they have good hardware interoperability. If a particular application you like to use works well with your computer, it is said to have good software interoperability. PCs often have better software interoperability than Apple computers, but why? The main reason is that the entire PC ecosystem was designed for interoperability: often the software is from Microsoft, the processor from AMD or Intel, the computer from yet another third party, and so on. Apple's ecosystem, on the other hand, is an example of a closed and proprietary model. Admittedly, Apple has done well over the last few years (when Steve Jobs was there), hitting a market valuation that almost exceeded the combined market values of IBM, Google, and Microsoft. But at what cost to consumers? There are over 4 million desktop applications that run on Windows platforms, including QuickBooks and ACT!, two applications that we use at work but that I cannot access directly because I am on a Mac. The Mac vs. PC comparison above is just that, an analogy.

In the corporate world, vendors that sell closed and proprietary systems do so to lock customers into their platform in an unnatural way. Once locked in, it is painful and costly to break back out. In addition, while you are locked in, the proprietary vendor often charges you more than you would ordinarily pay for upgrades and enhancements of features and functionality. Why is this the case? Because you have removed yourself from the OSC market, where competition forces vendors to innovate and compete on the price and value of the solutions they offer. In addition, any new innovation that occurs on the outside may not be available to you, because you are using a proprietary system that does not interoperate well with it. Technology changes at lightning speed, and no one vendor, no matter how large, can offer best-of-class applications in every area of solutions that a corporation is seeking.

When evaluating any technology solution for an organization, it is best to choose a vendor that is committed to innovation and charges reasonable prices for its solutions and services. Don't choose a vendor that locks you into a proprietary solution that is expensive and may fall behind in innovation. Clearly, any organization that wants to keep up with the innovation cycle of technology and keep its budgets in line will look for vendors that offer a solution that includes interoperability of both software and hardware. Look for my next post, which will continue the discussion of Open System Computing and will discuss the topic of portability.
