
SQL Server is a relational database management system (RDBMS)

3 June 2020


Microsoft SQL Server is a relational database management system (RDBMS) developed by Microsoft Corporation. Its main query language is Transact-SQL, created jointly by Microsoft and Sybase. Transact-SQL is an implementation of the ANSI/ISO standard for Structured Query Language (SQL) with extensions. SQL Server is used with databases ranging from personal-scale up to large enterprise-scale, and it competes with other DBMSs in this market segment.
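
As a small illustration of what "with extensions" means here, the sketch below uses two Transact-SQL features that go beyond the ANSI standard of the time: local variables and TOP. The table name is hypothetical, and the syntax shown is that of modern SQL Server versions:

    -- A Transact-SQL local variable (a T-SQL extension)
    DECLARE @MinPrice money;
    SET @MinPrice = 100;

    -- TOP is likewise a T-SQL extension to ANSI SQL
    SELECT TOP (5) ProductName, UnitPrice
    FROM Products                    -- hypothetical table
    WHERE UnitPrice >= @MinPrice
    ORDER BY UnitPrice DESC;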

SQL Server Release History

Version       Year  Title                      Code name                    Internal version
1.0 (OS/2)    1989  SQL Server 1.0 (16-bit)    Ashton-Tate / MS SQL Server  –
1.1 (OS/2)    1991  SQL Server 1.1 (16-bit)    –                            –
4.21 (WinNT)  1993  SQL Server 4.21            SQLNT                        –
6.0           1995  SQL Server 6.0             SQL95                        –
6.5           1996  SQL Server 6.5             Hydra                        –
7.0           1998  SQL Server 7.0             Sphinx                       515
–             1999  SQL Server 7.0 OLAP Tools  Palato mania                 –
8.0           2000  SQL Server 2000            Shiloh                       539
8.0           2003  SQL Server 2000 64-bit     Liberty                      539
9.0           2005  SQL Server 2005            Yukon                        611 and 612
10.0          2008  SQL Server 2008            Katmai                       661
10.25         2010  Azure SQL DB               Cloud Database or CloudDB    –
10.50         2010  SQL Server 2008 R2         Kilimanjaro (aka KJ)         665
11.0          2012  SQL Server 2012            Denali                       706
12.0          2014  SQL Server 2014            SQL14                        782
13.0          2016  SQL Server 2016            –                            –

Prehistory (up to 1986)

The rise of client-server technologies in the second half of the 1980s was driven by two fields that had been developing rapidly since the late 1970s: personal computers on the one hand and computer networks on the other.

For a long time DBMSs were available only for mainframes, and only as the performance of personal computer and minicomputer processors grew did DBMS developers (such as Oracle) begin creating corresponding versions of their products. One of the first RDBMSs for the PC was Oracle v3, released in 1983. At that time the few PC owners who used such products did so mainly for application development and testing.

The year 1986 was a key stage in DBMS development. By that time several more DBMS companies had appeared, one of the most notable being Sybase, founded two years earlier. By 1986 Sybase had begun pairing intelligent workstations (usually built by Sun Microsystems or Apollo Computer) with database servers (built, for example, by Oracle).

Client-server technology itself made it possible to separate the information-processing modules (the so-called back end) from the interface modules (the so-called front end). With network penetration growing steadily, solution providers moved on to distributing the remaining tasks (report formatting, data validation, and so on) among networked workstations, leaving the server only the tasks that required centralized handling (data storage and protection, optimization of query execution, and so on).

The DBMS developers themselves played a significant role in the transition from hierarchical to relational databases. IBM, for instance, was by then gradually moving its customers from hierarchical DBMSs (such as IMS) to DB2 and SQL/DS. Although the new DBMSs lagged behind IMS in speed, they surpassed it in ease of programming and maintenance, and DB2 shipments quickly exceeded expectations, capturing a significant market share in the first year of sales.

In September 1986, Gupta Technologies introduced SQLBase, implementing the concept of a networked database server for PCs. Gupta was also among the first to provide transparent access to IBM mainframes running DB2, giving direct access to the data stored there without downloading files or tables to the user's workstation.

By the end of 1986, SQL had become almost universal as the main language for working with data in a DBMS. IBM, Oracle, Sybase, and Gupta used similar SQL syntax for the messages sent from the DBMS client part (front end) to the server part (back end), which made it possible to combine client and server components from different manufacturers.

That same year, the American National Standards Institute approved SQL as a standard for data processing, which jeopardized the prospects of DBMSs lacking SQL support. Cullinet, for example, announced SQL support in its minicomputer DBMS, but delays in implementing it cost the company DBMS market share, ceded to IBM and its DB2.

First steps of SQL Server (1985-1987)

At that time, all of Microsoft's development was focused exclusively on personal computers, and its most profitable product was the MS-DOS operating system. Client-server processing on personal computers was only beginning to gain popularity in 1986 and therefore lay outside the company's interests.

A year earlier, in June 1985, IBM and Microsoft had signed a Joint Development Agreement (JDA) containing only general provisions for future cooperation. In August 1985, the JDA was supplemented by a document codenamed “Phase II” containing plans for OS/2 development.

At that time, the product was listed as CP/DOS (Control Program/DOS according to IBM’s mainframe product naming policy), while Microsoft listed the product as DOS 5. In late 1986 – early 1987, the project was officially renamed to OS/2 to make it similar to the IBM PS/2 line of computers.

On April 2, 1987, OS/2 was announced (version 1.0, promised in the press release for the first quarter of 1988, eventually shipped in December 1987). Under the plans announced in April 1987, IBM intended to add DBMS functionality to OS/2, using the concept developed by Gupta Technologies: the personal computer sends SQL queries to the host through network routers and receives back only the results of the query.

Although OS developers had been building some DBMS functions into their products for several years, IBM's idea of a full-fledged DBMS embedded in the OS made many managers reconsider the PC as a suitable platform for multiuser applications and for the client-server concept[1].

Shortly after the announcement, IBM also unveiled a special, enhanced edition of the OS, OS/2 Extended Edition, which was to be bundled with OS/2 Database Manager and several other network and server solutions.

Although Database Manager was oriented more toward mainframes than personal computers, the shared lineage of the two products let IBM offer customers a more attractive package than its competitors could. For Microsoft, the need for its own database technology became obvious and very urgent.

To solve this problem, Microsoft turned to Sybase, which at that moment had not yet released a commercial version of its DataServer product (that happened a little later, in May 1987, and only for Sun workstations running UNIX). The reason for the approach was that the pre-release version of DataServer, while not a product intended for wide use, had earned quite good reviews thanks to the new ideas it implemented, the client-server architecture in particular.

Under such an agreement, Microsoft would receive exclusive rights to the DataServer version for OS/2 and for all operating systems developed by Microsoft itself, while Sybase, in addition to royalties from Microsoft, would gain access to the part of the market occupied by Microsoft products (including the new OS/2).

Since personal computer performance was low, Sybase regarded this market segment as a springboard for subsequent sales of its product for more capable UNIX-based systems, especially as Microsoft's well-established distribution network promised far higher DataServer sales than Sybase could achieve on its own.

On March 27, 1987, Microsoft president Jon Shirley and Sybase co-founder Mark Hoffman (then also the company's president) signed the contract.

At that time the lion's share of the PC DBMS market belonged to Ashton-Tate with its dBASE. Because DataServer's capabilities differed from dBASE's, the two products were not seen as potential competitors, which allowed Microsoft to strike a deal with Ashton-Tate under which the latter would promote DataServer among its community of dBASE users.

SQL Server 1.0 (1988—1989)

On January 13, 1988, a press conference was held in New York, at which it was announced that Ashton-Tate and Microsoft had joined forces to develop a new product called Ashton-Tate/Microsoft SQL Server. On the same day, a joint press release was issued with the announcement of a new product based on Sybase developments.

The preliminary release date was the second half of 1988. As for the companies' roles in developing and promoting the product, the press release had Ashton-Tate overseeing database-related development (and contributing its own work in that area), with Microsoft playing the equivalent role for local-network technology.

After SQL Server's release, Ashton-Tate was to license the product from Microsoft and handle retail sales worldwide (both as a standalone product and with future versions of dBASE), while Microsoft would supply the product to OEM hardware manufacturers.

SQL Server was positioned from the start as a relational DBMS with SQL support and local-network capability. Support for SQL Server working alongside dBASE or any other workstation software was also announced. Much emphasis was placed on the product's client-server architecture, separating the client functions (front end), through which users would view the data, from the server (back end), which would store it.

Ashton-Tate and Microsoft also announced "three major innovations in relational database technology". The first was support for stored procedures, compiled by SQL Server, which would "significantly speed up" data retrieval and help maintain data integrity in a multi-user environment. The second was the constant availability of the kernel (without interrupting users' work) for administrative tasks such as making backups and restoring from them.

The third innovation was support for technology acting as a bridge between online transaction processing systems and PC databases. SQL Server itself was to be built on an "open platform" architecture, letting third-party software developers create applications that use SQL Server's network and multi-user capabilities.
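
To make the first of these innovations concrete: a stored procedure is a named batch of SQL that the server compiles and keeps, so clients invoke it by name instead of shipping query text. A minimal sketch in present-day Transact-SQL syntax (the object names are hypothetical):

    -- The server compiles and stores the procedure once.
    CREATE PROCEDURE GetOrdersSince
        @SinceDate datetime
    AS
    BEGIN
        SELECT OrderID, CustomerID, OrderDate
        FROM Orders                        -- hypothetical table
        WHERE OrderDate >= @SinceDate;
    END;

    -- Clients then execute it by name:
    EXEC GetOrdersSince @SinceDate = '1989-01-01';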

At the same time, Bill Gates, who was then chairman of Microsoft, called the network "the most important computing platform for new and innovative applications". SQL Server was to run on any OS/2-based network server, including Microsoft OS/2 LAN Manager and IBM LAN Server, and to interact with workstations running OS/2, PC-DOS, or MS-DOS.

Ashton-Tate saw SQL Server as a chance to win the personal computer DBMS market without abandoning further development of dBASE; both products were also to be offered to corporate customers. Microsoft expected to promote SQL Server as the basis for transaction-oriented systems, including accounting systems, document libraries, research management systems, and others.

To promote the new product, the two companies planned a series of seminars and conferences, the first being the Microsoft Advanced Network Development Conference, scheduled for March 30 to April 1 in San Francisco and April 13-15 in New York.

Sybase, although its name did not appear in the product's name, was in fact the principal developer of the three companies; Microsoft's contribution, by contrast, was quite small. A small team at Sybase was porting the DataServer engine to OS/2 and the DB-Library client interface to MS-DOS and OS/2. Microsoft, for its part, handled testing and project management and developed several additional utilities to simplify installation and administration of SQL Server 1.0.

The new product was conceived as a port of Sybase DataServer to OS/2, to be sold by both Microsoft and Ashton-Tate. The new dBASE IV, being developed in parallel by Ashton-Tate, was also to appear in a server edition, allowing the dBASE IV language and tools to be used to build client applications for the new SQL Server.

The new client-server model was expected to take dBASE to a new level of performance, letting far more users work with the data than the shared-file model common at the time could support.

Beta Versions

A beta version of Ashton-Tate/Microsoft SQL Server was released on October 31, 1988 as part of the SQL Server Network Developer's Kit (abbreviated NDK). The kit contained a pre-release version of SQL Server, documentation, program libraries for SQL Server's application interfaces, and Microsoft OS/2 LAN Manager.

The program libraries were intended for compiling (with Microsoft's own C compiler) MS-DOS, Windows, or OS/2 applications that would work with SQL Server over a local network. The kit was sold strictly for software development, but it came with a coupon entitling buyers to upgrade to the full version of SQL Server once it was released.

The NDK was sold at a reduced price directly by Ashton-Tate in the USA and Canada (and also by Microsoft in the USA). Microsoft additionally offered a significant discount to developers who had already bought the Microsoft OS/2 Software Developer's Kit or attended one of the Microsoft Advanced Network Development Conferences; Ashton-Tate, in turn, offered a similar discount to developers who had attended the 1988 Ashton-Tate Developer's Conference.

The NDK had plenty of bugs and shortcomings, but it did run on personal computers (for example, one with a 10 MHz Intel 80286 processor, 6 MB of RAM, and a 50 MB hard disk).

Release

Official sales of Ashton-Tate/Microsoft SQL Server 1.0 began on April 29, 1989. At a special team event held in Torrance, members of the SQL Server team wore t-shirts reading "Ashton-Tate SQL Server: On-Time and Proud of it".

InfoWorld's tests showed that even on a network with 24 workstations, Ashton-Tate/Microsoft SQL Server 1.0 handled the load faster than a conventional multi-user database (the most common type at the time), and with stored procedures could achieve response times under two seconds. Reviewers also noted how easy and convenient writing the test code was.

The trade press received the new product quite positively, but sales were very low. OS/2 sales were disappointing as well, since many users had no desire to move from MS-DOS to OS/2. Completing the picture, applications for SQL Server could be written only in C: the promised dBASE IV Server Edition from Ashton-Tate had been postponed, and third-party makers of SQL Server tools were in a similar position.

Competition played its part too: the PC DBMS market already offered XDB from XDB Systems, SQLBase from Gupta Technologies, and OS/2 Extended Edition (in single-user mode) from IBM.

By 1990 the situation had not improved. The joint promotion plans, under which SQL Server was to gain a foothold in the large community of dBASE developers, had failed. Even with its release delayed, the desktop version of dBASE IV (shipped in 1989) still contained a large number of bugs, earning it a bad reputation.

The Server Edition, which was to simplify development of high-performance applications for SQL Server, never came out. Developing dBASE applications for SQL Server proved difficult: building a single-user, record-oriented application differed radically from building multiuser applications, which had to cope with concurrent task execution, correct concurrent access to data, and the low bandwidth of the local networks of the day.

The first attempts to connect dBASE tools to SQL Server showed the two products working together inefficiently (row-at-a-time data retrieval, for example, became a problem, and cursors with arbitrary row navigation did not yet exist).

As a result Ashton-Tate, which two years earlier had led the personal computer DBMS market, now had to fight for survival and was forced to refocus on its main product, dBASE. Microsoft, meanwhile, launched OS/2 LAN Manager under its own brand (originally only OEM deliveries had been planned) and needed SQL Server to help lay the foundation for client-server tools that worked with Microsoft LAN Manager and Microsoft OS/2.

All this led to a decision to end the joint promotion of SQL Server, after which the product was slightly modified and reintroduced as Microsoft SQL Server.

SQL Server 1.1 (1990)

Even before version 1.1 was released, Microsoft officials (unlike independent analysts) predicted a sharp rise in sales of the new version, but those hopes were not realized.

Microsoft SQL Server 1.1 was released in August 1990 as an update and replacement for Ashton-Tate/Microsoft SQL Server 1.0, which was sold in 1989.

When version 1.1 shipped, Microsoft still did not regard SQL Server as a product capable of turning a profit on its own; it was treated as just one of the applications for LAN Manager (Microsoft even began building partner sales channels for both products, although it had never before sold LAN solutions at retail).

The rapid release of client applications from Borland and DataEase International should have helped, especially since several more such solutions, informally dubbed the "second generation", were expected within the year. At the same time, an equally important part of SQL Server, the set of installable network protocols, was still under development.

The TCP/IP version of the Net-Library, the first in that set, was still in alpha testing, and its DECnet and SPX versions were in development with no announced release date. On top of that, the evident complexity of client-server computing and the ongoing evolution of server and client applications meant that early sales of SQL Server 1.1 were very low.

SQL Server 1.1's capabilities were broadly similar to version 1.0's, but the new release fixed many bugs found in 1.0. In addition, SQL Server 1.1 could exchange data with a new client platform, Microsoft Windows 3.0, which had begun shipping in May 1990 and caused a tangible stir in the computer industry.

SQL Server 1.1 was now much easier to configure for use with LAN Manager, and installation for Novell networks, or as a standalone software development system, was improved. The release included the Basic Library for SQL Server, an interface between SQL Server and the Microsoft Basic Professional Development System, adding support for that language for the first time.

The SQL Server 1.1 client could work with a new version of DB-Library, the interface between the client and SQL Server, which had been sent to some developers a month before the new SQL Server itself was released.

The new DB-Library was rewritten almost completely and occupied only 40 KB instead of the previous 80 KB, leaving more memory for DOS applications on client systems (a developer now had 250 KB for the application, versus the roughly 50 KB left when using the static DB-Library shipped with SQL Server 1.0).

DB-Library's installable connection-protocol architecture could now serve clients on DOS, Windows, and OS/2, and also supported access to Sybase SQL Server on other platforms, although, according to Microsoft and Sybase themselves, those drivers were still under active development.

SQL Server 1.1 licensing included the following options:

  • a full-featured version for 5 users (expandable later to 10 users);
  • a full-featured version for an unlimited number of users;
  • an upgrade from the 5-user version to the unlimited-user version;
  • an upgrade from SQL Server 1.0.

Because client systems from different vendors could work with SQL Server 1.1, those vendors could sell Microsoft SQL Server 1.1 together with their own products. The first members of the SQL Business Partner Program were Ashton-Tate, Blyth Software, Database International, Revelation Technologies, and Sybase.

These companies could sell through the recently formed Microsoft Network Specialist channel, whose main task until then had been selling Microsoft LAN Manager, or directly to end users. Of the five partners, only Ashton-Tate could offer users a client front end for SQL Server at the new version's release: SQL Link for Framework III (about 40 such solutions were on the market at the time).

Database International said its Database SQL 1.0 would go on sale on September 14 of that year. According to Microsoft, the remaining two partners planned to release their solutions (MS-SQL Server Bond for Advanced Revelation from Revelation Technologies, and Omnis 5 from Blyth Software) in the third quarter of the same year.

The release of dBASE IV 1.1 Server Edition from Ashton-Tate, which was to support SQL Server, was expected before the end of 1990. In the first quarter of 1991, server interfaces were to follow for other dBASE client systems, namely Arago DBXL and Arago Quicksilver from WordTech Systems.

The third quarter of 1990 brought Access SQL (from Software Products International) and Q+E (from Pioneer Software), designed to provide direct communication between Microsoft Excel and SQL Server. Q+E, in particular, let essentially any Windows application (including on Windows 3.0) capable of Dynamic Data Exchange (DDE) connections interact with SQL Server.

From the user's perspective, Q+E 2.5 made it possible to view, join, and sort database information without writing the corresponding SQL queries. And since DDE calls were built into the Q+E application itself, applications such as Excel could then process the retrieved data further.

By early 1991, several dozen third-party software products could work with SQL Server. A significant factor was SQL Server's support for the dynamic-link libraries introduced in Windows 3.0, which it provided almost from the start of Windows 3.0 sales.

Thanks to this, Microsoft SQL Server steadily began to assume the leading position among DBMSs oriented toward the Windows platform. Nevertheless, the shortage of tools supporting development in languages other than C remained a real problem.

Overall, the policy of early and full support for Windows 3.0 applications drove Microsoft SQL Server's success, while the evident success of Windows as a platform in turn demanded changes both in SQL Server and within Microsoft itself.

In particular, the Microsoft team that had begun by porting someone else's product moved step by step to full testing and project management, and then to developing its own tools to simplify SQL Server installation and administration. Yet although Microsoft shipped its own client software, utilities, program libraries, and administration tools with SQL Server 1.1, the SQL Server engine was still written by Sybase, and Microsoft did not even have access to the source code.

Under this model, any change to SQL Server functionality (including bug fixes) required Microsoft to send a request to Sybase, which would then make the change. Microsoft wanted a full-fledged, independent SQL Server support team, and hired engineers with database experience to staff it.

Without access to the source code, however, the team found it impossible to resolve customer-critical support issues. Microsoft's dependence on Sybase for fixes was a further problem: bugs Microsoft declared critical were not being fixed fast enough.

In early 1991, Microsoft and Sybase reached an agreement giving Microsoft access to the SQL Server source code, but in read-only mode (that is, with no ability to make changes). This allowed the product support team (the so-called SQL Server group) to read the code and better understand the product's logic in non-obvious situations.

Microsoft also took the opportunity to assemble a small team of developers to study the SQL Server source code. The group combed through the code line by line in the parts of the program suspected of harboring a bug and produced "virtual bug fixes" (they still had no way to change the SQL Server code itself).

When these reports with their source-code analysis were sent to Sybase, fixes for the bugs Microsoft considered critical arrived much faster. After a few months of working this way, in mid-1991, Microsoft finally gained the ability to fix bugs directly in the code.

Since Sybase still controlled the product's source code, every code change was first sent to Sybase for review. As a result, Microsoft's developers became experts in the SQL Server code, which let them improve product support on the one hand and pay more attention to its quality on the other.

SQL Server 1.11 (1991)

In 1991, Microsoft released an interim version, SQL Server 1.11, prompted by a user base that had by then grown considerably. Although client-server architecture was still far from widespread, customers were gradually moving to it.

Yet despite positive reviews in the trade press, SQL Server sales still left much to be desired, largely because of OS/2's failure. Instead of the expected migration from MS-DOS to OS/2, PC users preferred upgrading to Windows 3.0, and Windows 3.0 sales were accordingly very high.

To spur sales of SQL Server and LAN Manager, Microsoft announced a special program for independent software vendors under which any developer meeting certain requirements could license cut-down versions of the products (so-called run-time versions, sufficient only to run third-party software) at a 40% discount, with six months of free support and some other benefits.

As InfoWorld wrote in late July 1991 when the new version of SQL Server was announced, Microsoft focused on improved networking and a new Windows database administration application. In particular, Microsoft promised to provide registered users of SQL Server for OS/2 with a copy of the SQL Commander utility within a year. SQL Commander had been introduced earlier, in May 1990, by Datura Corp. (formerly known as Strategic Technologies Group).

The utility made it easy for database administrators to manage user accounts, table indexes, triggers, and complex queries. As critics pointed out, however, its functionality almost entirely duplicated another Microsoft utility, the Server Administration Facility bundled with SQL Server.

SQL Server 1.11 also added support for Novell network administration tools, the then-latest OS/2 Requester 1.3, and detailed technical documentation for users of Novell products. Networking improvements included better operation on Novell networks, support for the Banyan VINES 4.10 protocols, and client connectivity to Sybase SQL Server on UNIX or VMS machines.

Licensing changed as well: the 5-user edition was replaced by a 10-user edition, and owners of the unlimited-user SQL Server 1.1 could obtain version 1.11 for free.

On the other hand, SQL Server 1.11 had very tangible limitations, scalability among them. It was a 16-bit product, since OS/2 gave applications only a 16-bit address space, and its performance was constrained by the absence of high-performance mechanisms in OS/2 itself, such as asynchronous I/O. Although SQL Server on OS/2 could handle most workloads of the day, there was a point beyond which the server simply began to "choke".

No limit was precisely defined, but SQL Server on OS/2 was used by workgroups of up to about 50 people. For larger groups there was a version of Sybase SQL Server for high-performance UNIX- or VMS-based systems, and this was exactly where the sales boundary between Microsoft and Sybase ran.

Customers choosing Microsoft's product wanted confidence that their needs would not "outgrow" it. Most software tools developed for Microsoft SQL Server worked with Sybase SQL Server without serious modification, and applications whose demands OS/2 could no longer satisfy could easily be moved to a more powerful UNIX system, an arrangement that benefited both companies.


SQL Server 4.2 (1991—1992)

Meanwhile, competition in the DBMS market was gradually intensifying, as were customers' demands on the software they chose. As a result, compatibility and interoperability came to the fore in the development of Microsoft's next version of SQL Server, along with the need for new functionality to satisfy customer requests.

Since a new version of the product was needed as soon as possible, Microsoft began developing it shortly after the release of SQL Server 1.11.

A question arose, however, about the new version's number. While Microsoft SQL Server 1.0 was on sale, Sybase was also shipping Sybase SQL Server 3.0, which had brought heavily used mechanisms to the PC DBMS market, such as the text and image data types and browse mode. The next Sybase SQL Server release was version 4.0 for the most common platforms and version 4.2 for the less common ones.

Thus, development of the new version of Microsoft SQL Server was actually based on the source code of Sybase SQL Server 4.2. Accordingly, for marketing purposes, the new version of Microsoft SQL Server also received the number 4.2.

Back in May 1991, however, Microsoft and IBM had announced the end of their joint development of OS/2: by then it was already obvious that users preferred Windows. Microsoft therefore decided to concentrate on the further development of Windows and of software for it.

By this time Microsoft was already developing a new microkernel-based operating system, codenamed NT. It had originally been intended as a new version of OS/2 (it was sometimes even referred to as OS/2 3.0). Once joint development ended, the project was reoriented toward Windows, acquiring a Windows-style user interface and the Win32 API, and with them a new name: Microsoft Windows NT.

Under the plans of the time, the first version of Windows NT would appear no earlier than two years later, and Microsoft SQL Server was eventually to move to it, which at that point did not seem the most sensible course.

For the time being, then, Microsoft had to keep developing SQL Server for OS/2, even though OS/2 was now effectively a competing product for Microsoft itself. Microsoft accepted this because it had no alternative at the time.

Microsoft developed SQL Server 4.2 for the upcoming OS/2 2.0, the first 32-bit version of OS/2. Since SQL Server 4.2 was to be a 32-bit release, porting it from the UNIX line looked simpler, as memory segmentation would no longer be a pressing problem; in theory, a 32-bit SQL Server should also have been more efficient.

The press was full of articles comparing performance on 16-bit and 32-bit platforms, and nearly all the authors were certain that moving to 32 bits would yield a significant performance gain (though some did spell out the conditions under which it would, or would not, hold). Memory addressing was considered the main source of the gain.

In the 16-bit segmented address space of the OS/2 1.x line, a memory access took at least two instructions: the first loaded the required memory segment and the second loaded the required address within that segment. With 32-bit addressing there was no need for a segment-loading instruction, so memory could be addressed with a single instruction. By some estimates, the potential overall performance gain was up to 20%.

At the time, many wrongly believed SQL Server had to run on a fully 32-bit platform to address more than 16 MB of memory. On OS/2 1.x an application could access only 16 MB of real memory; more virtual memory was available, but using it exacted a growing toll in page swapping.

On OS/2 2.0 an application could address more than 16 MB of real memory and avoid swapping. That in turn let SQL Server keep a large cache and fetch needed data from memory rather than disk, with a very positive effect on performance.

From an application's standpoint, however, all memory in OS/2 (both 1.x and 2.x) was virtual, so even the 16-bit SQL Server could take advantage of OS/2 2.0's larger real memory. In that respect a 32-bit version was simply unnecessary.

As it turned out, though, the first beta versions of OS/2 2.0 were much slower than OS/2 1.x, wiping out the advantages of the new memory addressing. Instead of the expected gain, users running the first builds of 32-bit SQL Server 4.2 on OS/2 2.0 saw a serious performance drop compared with SQL Server 1.1.

Then, unexpectedly, the plan to release OS/2 2.0 by the end of 1991 was revised. It even became unclear whether IBM would ship version 2.0 at all; in any case, OS/2 2.0 was not to be expected before the end of 1992.

Given this situation, Microsoft had to fall back to the 16-bit implementation of SQL Server and retarget it at OS/2 1.3. The reversion took the developers about three months, but another problem surfaced: IBM's OS/2 1.3 ran only on IBM's own licensed computers. In theory, third-party PC manufacturers could license OS/2 from Microsoft and ship it under OEM agreements.

Demand for OS/2, however, had fallen so far that most OEMs chose not to bother, which made OS/2 hard to obtain for third-party PCs. As a stopgap, Microsoft produced a cut-down OS/2 1.3 (codenamed Tiger) that shipped in the same box as Microsoft SQL Server and Microsoft LAN Manager. That OS/2 was already a "dead" OS was becoming ever more obvious.

Beta testing of Microsoft SQL Server 4.2 began in the fall of 1991, and in January 1992, at the San Francisco conference for software developers using Microsoft SQL Server, Bill Gates, then Microsoft's CEO, and Bob Epstein, one of Sybase's founders, officially announced the product. Version 4.2 was the first truly joint version.

The database engine was ported from the UNIX version 4.2 source code, with Microsoft and Sybase engineers porting and fixing bugs together. In addition, Microsoft created client interface libraries for MS-DOS, Windows, and OS/2, and for the first time the product included a Windows GUI tool to simplify administration. The source code was merged at Sybase headquarters, with source files delivered there over modem connections or on copied magnetic tapes.

Microsoft SQL Server 4.2 shipped in March 1992. Reviews in the trade press were quite favorable, as was feedback from buyers. After deliveries began in 1992, though, many wondered when a 32-bit SQL Server would follow. As later events showed, this engine version was the last Microsoft received from Sybase (apart from a few bug fixes the companies continued to exchange for some time).

SQL Server for Windows NT (1992—1993)

In early 1992 the SQL Server development team stood at a crossroads. On one side there was an established base of SQL Server customers running the DBMS on OS/2. They were waiting for a 32-bit SQL Server for OS/2 2.0, preferably immediately after IBM released OS/2 2.0 itself; in other words, they intended to stay on OS/2 for the foreseeable future.

The problem was that nobody knew exactly when OS/2 2.0 would ship. IBM representatives said the new version would arrive in the fall of 1992, but many were skeptical: Steve Ballmer, then a Microsoft senior vice president, publicly vowed to eat a floppy disk if IBM shipped the product in 1992.

On the other side, the SQL Server developers were expected to move their product to Windows NT as soon as possible, ideally with betas of both products appearing at about the same time. Windows NT was then regarded as Microsoft's high-end offering, and from the developers' standpoint the OS held several technical advantages over OS/2: asynchronous I/O, symmetric multiprocessing, and portability to RISC architectures.

Moreover, although Microsoft had decided in 1991 to fall back to the 16-bit SQL Server, work on the 32-bit version had not stopped. By March 1992, when version 4.2 first shipped, tests showed that both the 16-bit and the 32-bit versions ran slower on the OS/2 2.0 betas than the 16-bit version on OS/2 1.3. That might have changed with the official release of OS/2 2.0, but the betas available at the time suggested otherwise: operation was both slower and less stable.

With development resources limited, Microsoft could not afford to target both platforms at once in that situation; doing so would have piled on extra problems, requiring either an abstraction layer in the product to hide the differences between operating systems or parallel development of two versions.

Microsoft's developers therefore stopped work on the 32-bit SQL Server for OS/2 2.0 and threw themselves into the product for Windows NT. In developing the new version, no attention was paid to portability to OS/2 or other operating systems; the decision instead was to exploit every advantage Windows NT offered.

That approach effectively ended SQL Server development for OS/2 altogether, beyond supporting already-released versions and fixing their bugs. Microsoft began telling customers that future versions of SQL Server for OS/2 2.0 (including a 32-bit one) would depend on the existence and volume of customer demand, while Microsoft itself reoriented toward Windows NT.

Customers took the news differently: some understood the position, while others, whose business depended directly on OS/2, did not welcome it.

Meanwhile Sybase was working on a new version of its DBMS, to be called System 10. As with version 4.2, the new Microsoft SQL Server needed to be compatible with System 10 and to carry the same version number as the Sybase product released for UNIX. The result was a situation in which Microsoft's chief goal was the victory of Windows NT over OS/2, while Sybase's was the success of its System 10.

Although System 10 had not even reached beta testing, the two companies' release plans were already diverging. Microsoft, for instance, wanted to port Microsoft SQL Server to Windows NT as quickly as possible and to get a version of System 10 for Windows NT, OS/2 2.0, or both.

In the end it was agreed that Microsoft would begin porting SQL Server 4.2 for OS/2 to Windows NT immediately, and that Sybase would put Windows NT on the list of priority operating systems for System 10, making Windows NT one of the first systems to receive its version of System 10. Microsoft, in turn, would leave OS/2 support to Sybase, so customers wishing to stay on OS/2 could do so.

Although Microsoft hoped most customers would migrate to Windows NT, it was clear that not all of them would, so in this respect the arrangement actually suited Microsoft.

The plan also carried development benefits. The Microsoft team would be working with the stable, proven version 4.2, which by then it knew inside and out, making the move to a new operating system much easier, while Sybase could concentrate fully on its new version without worrying about problems tied to pre-release versions of the OS.

Under the plan as it stood in 1992, both versions (System 10 and SQL Server for Windows NT) were to be released, after which the companies would continue joint development of the product.

Microsoft's SQL Server team began accelerated development of the first SQL Server for Windows NT: the product had to ship within 90 days of Windows NT's release, and the internal plan aimed for 30.

The premise was that Windows NT was now effectively SQL Server's only platform, so the developers did not need to worry about portability, and in particular did not need to build an abstraction layer to hide operating-system differences. That role would be played by Windows NT itself, designed from the outset as a portable operating system with versions planned for different machine architectures.

As a consequence, the developers tied SQL Server's functionality closely to Windows NT's: unified event handling, installation of SQL Server as a Windows NT service, export of DBMS performance statistics to the Windows NT Performance Monitor, and so on. And since Windows NT let applications load code dynamically (via dynamic-link libraries), SQL Server gained the ability for third-party developers to plug in custom dynamic libraries.
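
In later releases this extensibility surfaced in Transact-SQL as extended stored procedures: a function exported from a third-party DLL could be registered and then called like any procedure. A hedged sketch, assuming that later mechanism (the DLL and procedure names here are hypothetical):

    -- Register a DLL export as an extended stored procedure
    -- (sp_addextendedproc is the registration procedure in later versions).
    EXEC sp_addextendedproc 'xp_hello', 'hello.dll';

    -- Call it like any other stored procedure:
    EXEC xp_hello;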

These changes left Microsoft SQL Server for Windows NT very different from the original 4.2 for OS/2: Microsoft had in effect rewritten the SQL Server kernel (the part of the program responsible for interacting with the OS) to work with the Win32 API directly.

Another goal in developing SQL Server for Windows NT was easing the transition from existing OS/2 installations to the new version of SQL Server and the new OS. The developers wanted every application written for SQL Server 4.2 on OS/2 to run unchanged on SQL Server for Windows NT.

Since Windows NT was planned to dual-boot alongside MS-DOS or OS/2, the team decided that SQL Server for Windows NT should be able to read and write databases created by SQL Server for OS/2 directly. To meet these goals the developers reworked SQL Server's internal architecture, adding many administration, networking, and extensibility functions, while forgoing the addition of new features to the engine core.

Another task was keeping the SQL dialects and capabilities of the Windows NT and OS/2 versions compatible; genuinely new features were to appear in the version that would be built on System 10. To signal compatibility with version 4.2 for OS/2, as distinct from Sybase's new work, Microsoft numbered its new SQL Server for Windows NT 4.2 as well; the marketing name was Microsoft SQL Server for Windows NT, and the internal designation SQL NT.

In July 1992, Microsoft held a conference for Windows NT software developers and handed out alpha versions of Windows NT to the participants. Although the new SQL Server had not even reached "beta" status, Microsoft immediately published on CompuServe the 32-bit program libraries developers needed to port their applications from OS/2 and 16-bit Windows to Windows NT.

Given the success of the NDK’s distribution to Windows 3.0 software developers in 1990, Microsoft hoped to repeat that success by providing developers with all the tools they needed to develop software for Windows NT.

In October 1992, Microsoft released the first beta of SQL Server for Windows NT. It had all the core functionality promised, and all of its components fully supported Win32. It was distributed to more than a hundred beta sites, an unprecedented number for a DBMS, whose beta programs typically involved no more than ten sites.

In parallel, deliveries of the OS/2 version of SQL Server continued (and went on into the following year). By March 1993, Microsoft had released a public beta of the product, the SQL Server Client/Server Development Kit (CSDK), which anyone could buy for a small fee.

To support it, Microsoft ran an open forum on CompuServe and required no non-disclosure agreement, which made it possible to distribute more than three thousand CSDKs. By May 1993, support requests for the pre-release product outnumbered those for the OS/2 version, yet despite their volume the overall reaction to the pre-release was quite positive.

In July 1993, Microsoft released Windows NT 3.1, and within 30 days the SQL Server team shipped the first version of Microsoft SQL Server for Windows NT. The release was very successful: sales of both the DBMS and the operating system under it grew.

By early December 1993, a significant share of customers had migrated from the OS/2 version to SQL Server for Windows NT, and surveys showed that those who had not yet moved planned to do so, even despite Sybase's announced intention to develop System 10 for OS/2.

Migration from SQL Server for OS/2 to the Windows NT version was painless, and the upgrade brought a performance gain as well. Within nine months SQL Server sales had doubled, with the new Windows NT version accounting for 90% of sales and the old OS/2 version the remaining 10%.

Microsoft's internal tests showed that the focus on a single platform (Windows NT) had paid off: SQL Server for Windows NT, running on cheaper hardware, outperformed DBMSs running on UNIX and more expensive equipment. In September 1993, Compaq Computer Corporation published the first Transaction Processing Performance Council (TPC) results for it. At the time, typical published TPC-B results came in at around $1,000 per TPS (transactions per second).

SQL Server on a machine with two 66 MHz Pentium processors achieved 226 TPS at $440 per TPS, less than half the cost of previously published results, while most minicomputer systems running UNIX showed results of no more than 100 TPS.

Some DBMSs did show markedly higher performance, but at a price well above $440 per TPS. For comparison, only 18 months earlier 226 TPS would have been the highest figure ever achieved even by a mainframe or minicomputer.

SQL Server 6.0 (1993—1995)

Microsoft's success heightened tension with Sybase. The DBMS market at the end of 1993 was very different from that of 1987, when the two companies had signed their contract: by 1993 Sybase was a successful software company, second only to Oracle Corporation in the DBMS market.

Microsoft, likewise, had grown considerably since 1987, and the SQL Server development team had grown with it (from about a dozen developers in 1990 to more than 50 in 1993, not counting marketing and product-support staff). These developers knew SQL Server's internal mechanisms well and had accumulated extensive experience building the Windows NT version.

Microsoft thus had every resource needed to develop SQL Server independently, but the 1987 agreement bound it: the contract provided only for licensing Sybase's technology. Under its restrictions, Microsoft could not add new functionality or make any other code changes without Sybase's prior consent.

Another source of mutual dissatisfaction was the final divergence of the companies' needs for SQL Server. Microsoft's developers, for example, wanted to integrate MAPI (Messaging API) support into SQL Server, but since the feature was Windows-specific, Sybase's developers were in no hurry to approve it: Sybase's interest lay in a product for UNIX, not for Windows NT.

Since Sybase gained no benefit from ports of its product to other operating systems, Microsoft's initiatives met ever more resistance from it. The port of version 4.2 to Windows NT had already been a point of contention, since adding Windows NT-specific functionality to 4.2 noticeably slowed System 10 development for other platforms.

Sybase was designing System 10 to simplify subsequent porting to various operating systems (including Windows NT), but from Microsoft's standpoint that meant giving up maximal use of Windows NT's facilities: System 10 could not, and never would, run on Windows NT as effectively as a product designed for it from the start.

All this meant the two companies no longer really needed each other, and their 1987 agreement had outlived its purpose. Microsoft SQL Server for Windows NT was by now a viable alternative to Sybase SQL Server running on UNIX, Novell NetWare, and VMS.

Customers could now buy Microsoft SQL Server for the price of a comparable UNIX solution, run it on less powerful (and therefore cheaper) hardware, and administer it with less highly qualified specialists. Royalties from Microsoft, meanwhile, made up only a small share of Sybase's income next to sales of its UNIX products. The two companies were in effect fighting over the same customers, and both understood it was time to change the nature of their relationship.

On April 12, 1994, Microsoft and Sybase announced the end of joint SQL Server development. Each company would continue work on its own version of SQL Server, and Microsoft could now develop Microsoft SQL Server without looking over its shoulder at Sybase.

Sybase, for its part, could now port System 10 to Windows NT unhindered (for the first time, a SQL Server bearing the Sybase logo would be available on the Windows NT platform, the old agreement having reserved development for Microsoft's platforms to Microsoft alone). Both products were initially meant to stay backward compatible with existing SQL Server applications, but the idea was later abandoned because the companies' goals had diverged too far.

Sybase developed its product with compatibility with its UNIX versions foremost in mind, Microsoft with compatibility with Windows NT. The two SQL Server lines soon competed with each other directly, leaving Microsoft once again in an ambiguous position: it had to support a competing product (System 10) on the Windows NT platform, because a variety of available products helped OS sales.

In early 1994, the SQL Server team had planned to base its new version on the Sybase System 10 source code, but the break-up upended those plans. Apart from a couple of fixes, the last sources received from Sybase dated to early 1992 (version 4.2 for OS/2).

Given that Sybase intended to release System 10 for Windows NT by the end of that year, users would have a good reason to upgrade their DBMS by switching from Microsoft SQL Server 4.2 to Sybase System 10. For Microsoft that meant losing its customer base, so a response had to be prepared quickly.

Microsoft quickly planned an ambitious release with many performance and functionality improvements, code-named SQL95 in a nod to the planned Windows 95. Data replication by means built into the DBMS was a hot topic in 1994, so replication became a cornerstone of the future release.

The same applied to positioned cursors, a mechanism the developers considered essential for bridging the gap between applications built around record-at-a-time processing and a relational database. No widespread DBMS of the time had a fully functional implementation of positioned cursors for client-server architecture, and the SQL Server team believed the mechanism would lift their product's reputation.
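
As an illustration of what positioned cursors allow, here is a minimal sketch in the Transact-SQL syntax of later SQL Server versions (the table name is hypothetical):

    -- A scrollable cursor permits arbitrary row navigation.
    DECLARE order_cur SCROLL CURSOR FOR
        SELECT OrderID, OrderDate
        FROM Orders                        -- hypothetical table
        ORDER BY OrderDate;

    OPEN order_cur;
    FETCH LAST FROM order_cur;             -- jump to the last row
    FETCH ABSOLUTE 10 FROM order_cur;      -- position directly on row 10
    FETCH PRIOR FROM order_cur;            -- step back one row
    CLOSE order_cur;
    DEALLOCATE order_cur;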

In addition, work was under way on a completely new set of management tools, codenamed Starfighter (later SQL Server Enterprise Manager), planned for inclusion in the next version. The list of planned new features kept growing.

Customers' general reaction to the news that Microsoft would develop SQL Server on its own was quite negative. On June 14, 1994, Microsoft held a general conference in San Francisco for customers, analysts, and journalists.

Jim Allchin, then a Microsoft senior vice president, presented the plans for the future and the planned SQL95 release. The plans and designs were received with approval, but many openly doubted the release date, questioning whether Microsoft could ship the promised product even by the end of 1995; the press sarcastically dubbed the release SQL97 and even SQL2000.

According to internal plans, the developers were preparing to present the release in the first half of 1995. The first beta version was released in October 1994. Starfighter was not yet finished at that point, but the server itself was, and since it is the server that accounts for most of the load at beta sites, it was decided to release its beta first. The release was followed by a series of updates over several months, while the number of beta sites grew to 2,000.

Back in 1993, Microsoft had decided that databases would be a key technology across its product line, and in late 1994 it began engaging expert consultants from DEC and other key market players for the teams working on the Microsoft Jet and SQL Server projects.

The purpose of these consultations was to plan the components of a new generation of database products. During 1995, in parallel with the core team’s work on SQL Server 6.0 and 6.5, a second team developed a new query processor as part of a component that later evolved into the Microsoft Data Engine (MSDE).

In parallel with MSDE, work was also under way on OLE DB, a set of interfaces that would allow elements of the core SQL Server product to be developed as independent components capable of interacting with one another through the OLE DB layer.

For about nine months, work on SQL Server continued around the clock. On June 14, 1995, the product was released as Microsoft SQL Server 6.0, meeting the internal deadline. The release was followed by many positive reviews in the trade press. InfoWorld placed Microsoft SQL Server second in its DBMS rating, based on its second annual survey of the 100 companies with the most innovative client-server applications.

At the same time, SQL Server’s share among respondents who named it as their DBMS of choice grew from 15% to 18%, while Oracle’s share fell from 24% to 19%. Sybase’s share also grew, from 12% to 14%. Three of the ten best applications noted by InfoWorld were built on Microsoft SQL Server.

SQL Server 6.5 (1995—1996)

However, there was also evidence that Microsoft SQL Server’s market share was much smaller than such surveys suggested. One problem was that Microsoft was still a newcomer in the DBMS sector. Oracle was the clear leader at the time, and Sybase, Informix, and IBM held strong positions.

The market situation was in fact alarming for Microsoft, as all of these companies began to aim their sales tactics squarely at Microsoft SQL Server, and Sybase, Informix, and Oracle were all planning new versions of their products. As part of its strategy, Microsoft continued to actively strengthen the SQL Server development team, which by then had more than four years of history.

It hired both well-known specialists of the day, such as Jim Gray, Dave Lomet, and Phil Bernstein, and lesser-known developers, including former DEC employees who had worked on Rdb.

After the release of version 6.0, work began on version 6.5. The new version was to implement the features deferred from the 6.0 release, especially since requirements for DBMSs had grown significantly during the 18 months of its development.

For example, by 1995 the Internet and data transmission already played a major role, and version 6.5 was meant to satisfy those demands. The full-featured 6.5 beta was released on December 15, 1995 to 150 beta sites. Official deliveries of the new version began in April 1996, about ten months after the 6.0 release.

The release also added tools that simplified use of the product, enhanced distributed transaction support, and other features, and SQL Server obtained a certificate of compliance with the ANSI SQL language standard.

Later, in December 1997, in parallel with the second beta of SQL Server 7.0, SQL Server 6.5 Enterprise Edition was released, supporting two-node Microsoft Cluster Server failover clusters, eight processors, and 3 GB of address space.

SQL Server 7.0 (1996—1998)

Development
At the end of 1995, development began on the next version of SQL Server, codenamed Sphinx. At the first stage, the code of the future MSDE was merged into the SQL Server code base, and the team working on it joined the main SQL Server development team.

The new generation of SQL Server had one main goal: to redesign the entire database engine so that users could scale SQL Server as their needs grew. This meant steadily extending its ability to exploit faster processors (and more of them) and all the memory the operating system could provide.

Moreover, this extensibility was not to restrict the ability to add new functionality to any component: adding a new algorithm to the query processor code should be as easy as plugging a new hard drive into a computer. Beyond extensibility, SQL Server had to support new classes of database applications, which meant scaling down as well, i.e., reducing hardware requirements so the product could also run on much weaker systems such as home PCs and laptops.

In the near term, the redesign was to achieve two goals:

  • implement full row-level locking with an intelligent lock manager;
  • create a new query processor that supported mechanisms such as distributed heterogeneous queries and handled arbitrary queries efficiently.

One area that received particular attention was support for high-end applications such as enterprise resource planning software, where scalability and ease of use were required along with high reliability of the database engine.

Several algorithms were also developed that automated most of the database configuration process and let the system resolve on its own some of the configuration issues that database administrators otherwise faced; Microsoft subsequently received several patents for them. Work was also done on a record-level (row-level) locking mechanism.

This mechanism would let applications lock a specific row in a table rather than an entire page, significantly reducing conflicts between simultaneous modifications of data in the same table. Version 6.5 had only a very limited implementation of this mechanism, so the new version was to implement full row-level locking.
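
As a rough illustration in today’s T-SQL (not the 1990s implementation itself), the difference in lock granularity can be made explicit with table hints; dbo.Orders is a hypothetical table:

    BEGIN TRANSACTION;

    -- Request locks on individual rows only, so concurrent updates to other
    -- rows stored on the same page are not blocked.
    UPDATE dbo.Orders WITH (ROWLOCK)
    SET Status = 'shipped'
    WHERE OrderID = 42;

    -- With WITH (PAGLOCK) instead, the whole page holding the row would be
    -- locked, which is the coarser behavior the team wanted to move away from.

    COMMIT TRANSACTION;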

In October 1996, Microsoft purchased the Plato technology from the Israeli company Panorama Software Systems. Plato was one of the implementations of OLAP technology for DBMSs.

At that time (and still when SQL Server 7.0 was released in 1998), OLAP technology was considered very difficult to use and was therefore underused. Nevertheless, it was decided to integrate Plato into the SQL Server 7.0 code, which, given the new version’s scaling requirements, meant redesigning Plato to meet those same requirements.

The developers were tasked with turning it into a product that, in scalability, ease of use, and integration with the corporation’s software, would be indistinguishable from any other Microsoft product. The OLAP server, which became one of the key add-ons of SQL Server 7.0, was later named OLAP Services.

In December 1996, Microsoft Transaction Server 1.0 (codename Viper) was released, combining the functionality of a transaction monitor and an object request broker.

Beta
In June 1997, a limited first beta of the new SQL Server 7.0 was released. In December of the same year, the second beta was sent out for testing to several hundred users. Because of the transition to a new architecture, upgrading to the new version required a complete conversion of databases and their structures.

To support customers’ transition to the new version, a special program called the 1K Challenge was announced: 1,000 customers could send the SQL Server developers copies of their databases to be ported to version 7.0. A special lab for verifying the porting results was set up on the same Redmond campus where the SQL Server team was located.

Every week from February to August 1998, four or five third-party software companies sent their development teams to Microsoft for a week to verify in the lab that their products worked with SQL Server 7.0 without problems. Whenever problems were found, lead SQL Server developers took them up immediately, first discussing the options with the guests.

In June 1998, Beta 3 was released on a dedicated website, together with several sample solutions demonstrating the product’s new features. In addition, a special news server was launched so that any Beta 3 user could report bugs or ask the developers questions about the new features.

In total, more than 120,000 testers received SQL Server 7.0 Beta 3, including companies that ordered the version directly through the Microsoft website, MSDN subscribers, and participants in the official Microsoft Beta Program (who receive beta versions of all Microsoft products as they are released).

Before the release of SQL Server 7.0, there were rumors that Microsoft intended to replace Access with a simplified version of its relational SQL Server database. The corporation denied these rumors, stating that the Access version in Office 2000 would have two alternative database engines: Jet, the existing native storage engine for Access, and the new MSDE.

According to the information available at the time, MSDE was intended not as a built-in version of SQL Server but as a storage technology compatible with SQL Server and sharing its component architecture. This would let developers who used Access as a front end to SQL Server build MSDE applications that could scale from a desktop database up to its relational “big brother”, SQL Server or SQL Server Enterprise.

Release
On November 16, 1998, SQL Server 7.0 was publicly introduced at the COMDEX conference in Las Vegas. The new version was presented by Steve Ballmer personally. His speech emphasized the performance improvements of SQL Server 7.0 over the previous version, and he also addressed scalability and application availability. In his words, “ERP system manufacturers such as Baan, PeopleSoft and SAP will be able to use this DBMS in almost all their projects, except perhaps the largest.” He predicted that within the next year and a half, independent vendors would create about 3,000 applications for SQL Server 7.0.

By the time this version was released, there were more than a dozen successful implementations, including at major companies such as HarperCollins, CBS Sportsline, Comcast Cellular, and Southwest Securities. Ten of them had already switched to the new version, while Pennzoil, Barnes & Noble, and HarperCollins Publishers had been testing it for several months. Representatives of Pennzoil, News America (a HarperCollins division), and Barnes & Noble confirmed the improved performance of the new version.

Alongside SQL Server 7.0, a special server for uninterrupted SQL Server 7.0 operation was also presented at COMDEX, and the ERP vendor Baan Corporation introduced BaanSeries ’99, a set of applications designed exclusively for SQL Server 7.0.

According to Doug Leland, Microsoft’s SQL Server marketing manager, the entire development cycle took 3.5 years. Microsoft positioned the new version as “the first relational DBMS from Microsoft that supports all of its 32-bit Windows operating systems” and had no plans to release SQL Server for other operating systems.

Version 7.0 was released on December 2, 1998 as build 7.00.623.07, with the code freeze on November 27, 1998. The product became available for ordering in January 1999.

Many analysts saw the release of version 7.0 as “a significant step towards conquering the market of corporate computing systems”. In their view, Microsoft expected SQL Server 7.0, with its redesigned functionality, to become the corporate database standard. Analysts regarded the addition of online analytical processing in SQL Server 7.0 as possibly “the most important event that has occurred in the OLAP market since its inception”: OLAP systems had until then been designed exclusively for the corporate segment, and since Microsoft’s strategy included versions for home PCs as well, it made OLAP technology accessible to small companies, in itself implying a significant popularization of OLAP.

Another argument in favor of SQL Server 7.0 as a standard was its ability to integrate with other corporate systems, which was critical in heterogeneous multi-tier environments with diverse platforms and data warehouses. To advance in this area, Microsoft developed internal standards for data integration, such as OLE DB and ADO, and worked with third-party software vendors. Competitors, however, criticized these standards, stating that “some of these standards are strictly internal”, which severely limited their use by third parties. Significant criticism was aimed at OLE DB for OLAP, which Microsoft proposed as an industry standard while it was simultaneously part of its data warehousing stack. For example, Jeff Jones, a data management marketing program manager at IBM, cited as a major drawback the fact that this standard had been developed by Microsoft rather than by a standardization consortium, as was widely practiced. Microsoft representatives responded that the standard had been developed with the participation of more than 60 data warehouse vendors.

Analysts noted that Microsoft had a good chance of achieving its goal, pointing to its active encouragement of third-party software for SQL Server 7.0 and to a distribution model that looked better than Oracle’s or IBM’s and could in the future allow wholesale batches to be sold at a lower price, making Microsoft a serious player in the corporate database market.

The new version was to be sold worldwide through resellers: first the original English version, with French, German, Spanish, and Japanese versions to follow within the next two months. Microsoft also planned to release a Chinese version by the end of February 1999.

Two editions were planned, Standard and Enterprise, each in three configurations depending on the number of licensed users. In addition, a special offer was announced: within 99 days of the announcement, users could upgrade to SQL Server 7.0, or switch to it from a competing DBMS, for only $99 per user.

Analysts suggested that these prices might force Microsoft’s database competitors to cut the traditionally high prices of their products (Oracle, for example, officially refused to take that step). At the same time, analysts were rather skeptical about the new version, believing that SQL Server 7.0 was aimed primarily at the low end of the Windows NT database market, even though several beta testers had confirmed that the new version fully met their requirements.

For example, Herb Edelstein, an analyst at Two Crows, said that “the low prices set by Microsoft are aimed at eliminating competition in this market”, while “even with all the new features SQL Server 7.0 will be able to solve only part of the problems faced by users of large corporations”. Betsy Burton, an analyst at Gartner Group, believed that although the additions in the new version “deserve attention”, nevertheless “the overall reliability and scalability of the system is still in question”.

Representatives of companies that had tested the new version, however, continued to speak of it positively. In addition to those mentioned earlier, Mark Mitchell, a systems consultant at Applied Automation, and Joe Misyazhek, application support manager at Colorado Community College, praised the product, noting its affordable cost, good performance, and relative ease of use.

Microsoft’s move provoked responses from competitors. Oracle Corporation, for example, was forced to change its sales strategy: according to its statement, it would begin selling its DBMS pre-installed on pre-configured “server appliances” running a simplified operating system developed with Oracle’s own participation.

According to Oracle’s head, Larry Ellison, these innovations were to “reduce the cost of Oracle DBMS ownership and at the same time strengthen the competitiveness of the product in opposition to Microsoft SQL Server”. In mid-November 1998, Oracle signed an agreement with Dell, Compaq, Hewlett-Packard, and Sun Microsystems under which sales of the servers were to begin by the end of the first quarter of 1999.

The operating system installed on the new servers contained Solaris components (Linux components were to be used later) and was so simple that the initiative was called Raw Iron. Oracle’s partners planned to offer three types of servers (“small, medium and large”, as Ellison called them), pre-configured for specific tasks such as e-mail and IFS (Internet File System).

There was thus to be a shift from selling a boxed DBMS to selling servers with a much lower cost of ownership. This strategy, according to Ellison, was to help poach some of SQL Server’s buyers.

SQL Server 2000 (1998—2000)

As before, work on SQL Server did not stop after the release of the seventh version. Far from all of the originally planned functionality had made it into version 7.0, and several other developments intended for the next major release were in their final stages.

Two versions therefore went into development: Shiloh, the “minor” follow-up to version 7.0 (conditionally 7.5, by analogy with the previous release), and Yukon, the next major release.

Initially, SQL Server product managers were reluctant to predict how popular SQL Server 7.0 would be. Because the release was based on a completely rewritten engine, many customers treated it as a first release, and it was clear that many potential customers would prefer to wait for a “corrected version”, or at least for the first service pack.

Shiloh was therefore originally planned as a kind of super service pack. It was to include functionality left out of version 7.0 because of the tight schedule, as well as fixes for all bugs found by then, which was too much for an ordinary unnumbered update. Accordingly, Shiloh was to be released no later than a year after SQL Server 7.0.

Several factors, however, changed the original concept of Shiloh. First, contrary to expectations, only a small share of customers doubted the need to move to version 7.0, and sales to new customers exceeded the boldest expectations. Customer feedback was also quite positive.

Even after the release of SQL Server 7.0, Microsoft kept the lab for third-party developers running, so the developers continually received feedback and comments from customers. Issues and bugs were fixed in the normal course of work: a fix package for SQL Server 7.0 was released in May 1999, and a second one in March 2000. The need for the super service pack that Shiloh had originally been conceived as thus disappeared.

The second factor was customers’ requests for functionality. For example, the originally planned enforcement of referential integrity during cascading updates and deletes had not made it into SQL Server 7.0.

Customers were extremely interested in such a mechanism and demanded its implementation as soon as possible. There were also numerous requests for partitioned views and for optimized support of the star schema widely used in data warehousing applications.
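
The cascading behavior being requested is the one that eventually shipped in SQL Server 2000; a minimal sketch with hypothetical tables:

    CREATE TABLE dbo.Customers (
        CustomerID int PRIMARY KEY
    );

    CREATE TABLE dbo.Orders (
        OrderID    int PRIMARY KEY,
        CustomerID int NOT NULL
            REFERENCES dbo.Customers (CustomerID)
            ON UPDATE CASCADE   -- key changes propagate to dependent rows
            ON DELETE CASCADE   -- deleting a customer removes its orders
    );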

Another factor was competition between DBMS vendors, which demanded that the next release be bigger and better than originally planned. Larry Ellison’s “million-dollar challenge” also had a great impact: it clearly exposed functionality already implemented in Oracle’s DBMS but still absent from SQL Server, and adding it went far beyond a simple fix.

As a result, it was decided to make Shiloh a full-fledged major release with an 18-month development cycle, while keeping the official version number 7.5. The scope of the changes was hard to predict at the time; the only change known for certain was the addition of cascading updates and deletes.

It soon became clear that the release was outgrowing the original plans. The development team was also growing and moved from the main Microsoft campus to part of the twin offices. The larger team made it possible to add many medium and small improvements to the product without significantly shifting the release date.

Beyond improving and extending functionality, the developers also set themselves so-called stretch goals. For example, they declared the aim of a 20% performance increase across all types of applications, and to make the task concrete, comparisons were made against specific applications.

For instance, one of the main goals was to improve performance in the SAP R/3 Sales and Distribution benchmark by at least 40%. To achieve this, the developers made targeted changes in the optimizer that directly affected the queries issued by SAP, but also improved queries from other applications.

On February 17, 2000, at the San Francisco event celebrating the release of Windows 2000, the Sales and Distribution benchmark results were announced: a sustained load of 6,700 users, significantly higher than the 4,500 users SQL Server 7.0 handled in the same test on the same hardware (an eight-processor machine with Pentium III-550 CPUs). The performance gain was thus 48%, and the goal was met.

After the decision to extend the development period to 18 months, another decision was made: to add still more new functionality. This decision was kept strictly confidential and was not discussed even with many Microsoft managers. The new functionality was not mentioned after the first beta shipped in November 1999 and was not publicly announced until the Windows 2000 event in February.

This secret project, codenamed Coyote, aimed to add support for distributed partitioned views to SQL Server 2000, enabling high scalability when working with data. It was this functionality that made possible the world-record result announced in San Francisco in February 2000.

Initially these scalability changes had been intended for the version after Shiloh, but since most of the necessary components were effectively ready, it was decided to include them in SQL Server 2000. The changes included extended optimization of union views as well as the ability to update such views.
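
A distributed partitioned view is essentially a UNION ALL view over member tables that live on different linked servers, with CHECK constraints on each member telling the optimizer which key range it holds. A minimal sketch, with hypothetical server, database, and table names:

    CREATE VIEW dbo.OrdersAll
    AS
    SELECT * FROM Server1.Sales.dbo.Orders_1   -- CHECK: OrderID 1..1000000
    UNION ALL
    SELECT * FROM Server2.Sales.dbo.Orders_2   -- CHECK: OrderID 1000001..2000000
    UNION ALL
    SELECT * FROM Server3.Sales.dbo.Orders_3;  -- CHECK: OrderID 2000001..3000000

A query against dbo.OrdersAll whose WHERE clause restricts OrderID can then be routed only to the member servers whose ranges it touches.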

The first beta of Shiloh went out to beta testers in September 1999, and Microsoft soon announced that the official name of the new version would be SQL Server 2000. There were two main reasons for the name change.

First, because of the numerous major changes in the new version, it made no sense to ship it as an interim version 7.5, so it needed a new number.

Second, releasing it as version 8.0 would have made it the only product in the Microsoft BackOffice family without the 2000 suffix in its name. To keep the product names consistent, it was decided to call the product SQL Server 2000 (the internal version number was still 8.00.194).

From the user’s point of view, SQL Server 2000 offered far more than the previous version. SQL Server 7.0 had brought a completely rewritten engine, new storage structures, new data access methods, row locking, new recovery algorithms, a new transaction logging architecture, a new memory architecture, and a new optimizer.

Despite all this, from the point of view of a database developer or administrator, the language changes and improvements in SQL Server 7.0 were minimal. SQL Server 2000 brought numerous language improvements as well as major changes to previously introduced objects such as table constraints, views, and triggers, which affected all developers and most database administrators.

Since the internal engine changes were minimal, only two betas were planned. The second beta, released in April 2000, became a public beta and was sent to thousands of interested users, participants in specialized conferences, third-party software developers, and consultants. The development team froze the code at build 8.00.194.01 on August 6, 2000, and the product was released on August 9.

SQL Server 2005 (2000—2005)

Development of the next version of SQL Server, codenamed Yukon, began in parallel with work on Liberty, the 64-bit version of SQL Server 2000. Functionally, Liberty was essentially the same as the 32-bit version; the difference lay in much greater scalability. New functionality was to arrive as part of Yukon.

In July 2002, as part of the official presentation of its new .NET Framework, Microsoft announced that the next version of SQL Server, codenamed Yukon, would be able to use the .NET platform. In particular, it was stated that managing distributed data would become easier in Yukon.

On April 24, 2003, at a San Francisco conference dedicated to the release of Windows Server 2003, Microsoft announced the release of the 64-bit version of SQL Server 2000 (formerly known as Liberty). According to the published press release, the new version of SQL Server 2000 was designed to work together with the 64-bit version of Windows Server 2003.

The third product introduced together with Windows Server 2003 and the new version of SQL Server 2000 was Visual Studio .NET 2003. In Microsoft’s vision, this trio represented the next stage of integration between the operating system, the SQL server, and the development environment, approaching the transition to a unified .NET Framework platform, which was realized much more fully in the next version of SQL Server.

During the presentation, Steve Ballmer and Paul Otellini said that a server running the new version of SQL Server 2000 had set two new records in tests by the non-profit Transaction Processing Performance Council.

As in previous cases, the new version was made available in advance for testing by major Microsoft partners, including Cornell University, Information Resources, Inc., JetBlue Airways, Liberty Medical Supply, and Johns Hopkins University, whose official representatives spoke positively of the new product.

The purpose of the 64-bit release was to begin capturing the market segment that had previously been dominated by high-performance UNIX-based solutions.

Although its functionality remained essentially unchanged relative to the 32-bit version, the 64-bit version could work with the far larger amount of memory exposed by 64-bit Windows Server 2003, allowing the new version of SQL Server 2000 to scale up to the level of high-performance systems with which the 32-bit version, due to its limitations, could not compete. Customers of the 32-bit version were offered the upgrade at no additional charge.

In November 2003, at the PASS conference in Seattle, Microsoft executives described the new ETL mechanisms in Yukon, which were used to move previously accumulated information from existing applications into data warehouses.

From Microsoft’s perspective, these mechanisms were to be one of the arguments for attracting enterprise users. The SQL Server ETL architecture implemented in Yukon was called Data Transformation Services (DTS). According to Gordon Mangione, Microsoft vice president and head of the SQL Server team, DTS was to support parallelism, letting users perform several complex tasks, such as data transfer, reading, and rewriting, simultaneously within a single flow.

In addition to ETL, emphasis was placed on simplifying DBMS configuration and management and on improving scalability. In particular, Microsoft said that, thanks to the increased scalability, a process spanning millions of data records could be executed in seconds rather than minutes.

The new version of SQL Server was also to include functions simplifying the creation and management of data warehouses and the execution of business intelligence operations. Microsoft promised developers support for the new .NET platform (and the Visual Basic language in particular), eliminating the need to write DTS-specific code.

Also during the conference, Mangione announced the completion of the Best Practices Analyzer for SQL Server 2000, which supported a list of 70 rules compiled jointly by Microsoft developers and SQL Server users. The list was meant to simplify DBMS configuration for database administrators and help them avoid the most common errors.

It covered backup and failure recovery as well as DBMS management and performance monitoring. Mangione promised that the corporation would update the toolkit quarterly.

SQL Server 2008 (2005—2008)

The version of SQL Server that was to replace SQL Server 2005 received the codename Katmai. During active development, Microsoft was extremely reluctant to share information about the new version.

At the presentation of SQL Server 2005, Paul Flessner (then the Microsoft vice president responsible for SQL Server development) stated confidently that the new version would be released no later than two years after SQL Server 2005.

By April 2007, however, there was no information about an imminent release, or even about the start of beta testing. Nevertheless, an Austrian TechNet blog published information about a Katmai Technology Adoption Program (TAP) supposedly planned to begin in June 2007. There were also rumors that the new version would be released in 2008, but at the time Microsoft neither confirmed nor denied this information.

Some sources tied Katmai’s release to Longhorn Server and Visual Studio Orcas, which would have put the new version in the first half of 2008. Microsoft declined to comment on this information as well.

However, some journalists briefed by the corporation said that the rumors of a 2008 release were quite consistent with Microsoft’s internal plans, and that the refusal to disclose information about the new version was connected with a transition to a new development model, which made an early-2008 release of Katmai unlikely.

It was also mentioned that Katmai would have no official beta-testing phase; instead, it would undergo public testing through Community Technology Previews (CTPs). It was also claimed that some Microsoft customers were already testing parts of Katmai in April 2007 without having the entire release in hand.

As for functionality, journalists wrote that Katmai would merely be an evolution of SQL Server 2005, not a new generation of the product, as SQL Server 2005 had been in its time.

SQL Server 2008 R2 (2008—2010)

Development
Work on the new version began after the release of Microsoft SQL Server 2008.

Beta versions
The beta version of Microsoft SQL Server 2008 R2 was released in 2009.

Release
SQL Server 2008 R2 officially became available for purchase on April 21, 2010.

At the end of 2010, Ted Kummert, vice president of Microsoft’s Business Platform Division, said that the new version was “being adopted very quickly”: in the two months after release, “it was downloaded about 700 thousand times over the Internet”, which, he said, was “the highest rate for a new version of SQL Server”.

SQL Server 2012 (2010—2012)

Development
In parallel, on June 30, 2010, Scott Guthrie announced in his blog the release of a new version of the embedded edition of SQL Server, SQL Server Compact 4.0, aimed primarily at web applications built on ASP.NET 4.

Among the advantages of the new version, Guthrie emphasized that it required no installation, its compatibility with the .NET Framework API (support for ADO.NET, Entity Framework, NHibernate, and the like, plus the ability to work with Visual Studio 2010 and Visual Web Developer 2010 Express), and its compatibility with the then-current versions of SQL Server and SQL Azure.

On July 7, 2010, Ambrish Mishra, the SQL Server Compact project manager, presented the CTP1 version of the new SQL Server Compact 4.0 in the official blog of the SQL Server CE development team.

The innovations (beyond those mentioned by Scott Guthrie) included increased reliability, the stronger SHA-2 algorithm for encryption, compatibility with Compact 3.5 database files, simplified installation (including support for WOW64 mode and native 64-bit applications), reduced virtual memory usage, support for partially trusted callers (the AllowPartiallyTrustedCallers attribute, APTCA), support for WebMatrix Beta and Visual Studio 2010, and support for paging queries in T-SQL.
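
The paging queries mentioned above are the OFFSET ... FETCH form of T-SQL; a small sketch against a hypothetical dbo.Articles table:

    SELECT ArticleID, Title
    FROM dbo.Articles
    ORDER BY ArticleID        -- paging requires a deterministic order
    OFFSET 20 ROWS            -- skip the first two pages of ten rows
    FETCH NEXT 10 ROWS ONLY;  -- return the third page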

At the same time, the CTP1 version had some problems (incorrect uninstallation via the command line, compatibility problems with the then-current ADO.NET Entity Framework CTP3, and so on).

Beta versions
During the PASS Summit conference held November 8-11, 2010 in Seattle, attendees (as well as MSDN and TechNet subscribers) were given copies of the Denali CTP (some time later this version was posted on the official Microsoft website).

At the conference itself, Ted Kummert and Quentin Clark, general manager of Microsoft’s Database Systems Group, introduced the new version and spoke about the new AlwaysOn feature and the VertiPaq technology (part of SQL Server’s analytics and data warehousing services). Emphasis was also placed on the development of business intelligence tools in the new version, on interactive web-based data visualization tools (the Crescent project), and on developer tools codenamed Juneau.

On December 22, 2010, Ambrish Mishra announced in the development team’s official blog the release of the SQL Server Compact 4.0 CTP2 version and the Visual Studio 2010 tooling for working with this version of SQL Server CE.

The final version of SQL Server Compact 4.0 was officially released on the Microsoft website on January 12, 2011, completing a development cycle of about a year.

On July 11, 2011, the SQL Server development team announced in their official blog the release of Community Technology Preview 3 (CTP3) and the first service pack for SQL Server 2008 R2.

Among the most significant innovations (relative to SQL Server 2008 R2) in the CTP3 version of the new product, analysts noted the SQL Server AlwaysOn component for high availability, the ability to install SQL Server in a Windows Server Core environment, columnar data storage for faster query execution, T-SQL improvements (sequence objects and window functions), change data capture (CDC) for Oracle databases, user-defined server roles (previously roles had been rigidly fixed), Data Quality Services (knowledge bases that define metadata rules), a new data visualization tool called Project Crescent, support for contained databases (to ease movement between local SQL Server instances and SQL Azure), and a new development environment, SQL Server Developer Tools, codenamed Juneau.
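
Two of the T-SQL additions named above are easy to illustrate; the object and column names here are hypothetical:

    -- A sequence object: a number generator independent of any single table.
    CREATE SEQUENCE dbo.OrderNumbers START WITH 1 INCREMENT BY 1;
    SELECT NEXT VALUE FOR dbo.OrderNumbers AS OrderNo;

    -- A window function computing a per-customer running total.
    SELECT CustomerID,
           OrderDate,
           SUM(Amount) OVER (PARTITION BY CustomerID
                             ORDER BY OrderDate
                             ROWS UNBOUNDED PRECEDING) AS RunningTotal
    FROM dbo.Orders;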

SQL Server 2008 R2 SP1 contained fixes for bugs that customers had reported through the Windows Error Reporting service, as well as some functional improvements: new Dynamic Management Views, faster query execution through the FORCESEEK hint, the Data-tier Application Component Framework (DAC Fx) to simplify database upgrades, and control over the disk space available to PowerPivot.
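
In use, the FORCESEEK hint looks like this (the table and predicate are hypothetical):

    SELECT OrderID, Amount
    FROM dbo.Orders WITH (FORCESEEK)   -- force an index seek instead of a scan
    WHERE CustomerID = 42;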

Release
SQL Server 2012 was released in 2012.

SQL Server 2014

At the end of 2010 (that is, before the release of SQL Server 2012), Ted Kummert, vice president of Microsoft’s Business Platform Division, spoke in an interview about plans for the further development of the product (both SQL Server 2012 and future versions).

In particular, Kummert said that work on SQL Server was proceeding in the context of the Information Platform Vision, a set of capabilities that form the basis of the platform. SQL Server would remain a single product implemented on desktop systems, in data centers, and in the cloud (in both 32-bit and 64-bit versions).

Business intelligence (BI) would remain one of the priority areas. From Microsoft’s point of view, the priorities in BI would be the development of self-service BI tools and the development of the cloud computing ecosystem.

In addition, in moving business intelligence tools to the cloud, Microsoft continued working on consistency of the programming models and tools involved (implying, in particular, the ability to work with the SQL Azure environment from SQL Server Management Studio).

Much attention was also paid to DBMS scaling (raising the limits of a SQL Server system to several hundred terabytes), to virtualization of applications in the database environment, and to spatial data representation.

The SQL Server 2014 release became available on April 1, 2014.

SQL Server 2019

Query faster from any database

 