Database Design
Database Design – The widely known methods of designing databases (DB) emerged in the course of developing ever more complex Information Systems (IS), which had to take into account the requirements not of a single user but of large groups and teams. A single integrated database was created to solve many problems, each of which used only “its own” part of the data, usually overlapping with the parts used by other problems. The most important design methods were therefore methods for eliminating redundancy in the data. These methods were linked to other means of ensuring logical data integrity.
Integrated database – statement of the idea
A fundamental requirement was to separate programs from the integrated data. This principle treats data as a resource of the enterprise in its own right; it is also important that relatively stable data are separated from applications that may be subject to frequent change.
Another important problem of database design was ensuring the required operational parameters, such as the amount of external memory or the execution time of various operations. Other requirements are also well known: for example, information must not be lost either because of equipment failures or because of a user error. This differs from the position in which whoever solves a particular task is responsible for the safety of the data for that task.
An understanding of the integrated database as a common information resource of the enterprise took shape. The data store became similar to a large computer that is used simultaneously by many users for different purposes and must be able to operate continuously.
Behind the idea is a classical design methodology
The classical database design methodology is a powerful and elegant school of thought with its own philosophy, its own ways of perceiving reality and of existing within it. Within this school there appeared its own applied mathematics and its own concepts of the “World” and the “Subject Area” (SA) and their models. With respect to database design, methods for performing the following design stages have been worked out and integrated into coherent schemes:
- gathering data about the subject area (requirements analysis and description of the subject area using the so-called “process”, or “usage perspective”, approach and the “non-process”, or “information structure”, approach);
- choosing a presentation language for the so-called “semantic” model to record information about the subject area, for its subsequent analysis and for synthesis of the database model;
- analysis of the collected data about the subject area: classification, formalization and integration of the structural elements of the subject area description, formalization of both structural and procedural integrity constraints on the elements of the future subject area model, and determination of the dynamics of object instances in the subject area;
- synthesis of the conceptual database model: design of an integral conceptual database schema in the selected semantic modeling language;
- choosing a specific data model and DBMS for database implementation;
- designing a logical database schema for the selected DBMS (also called “implementation design”; a small sketch follows this list);
- developing the physical database structure (the “physical” or “internal” schema, also known as the “placement schema”), including database placement;
- developing the technology and procedures of initial database creation and filling;
- developing the technology and procedures of database maintenance;
- developing universal programs for accessing the database and the corresponding user interfaces;
- information support for the development of specific data processing programs: providing meta-information, control example data, etc.;
- receiving feedback from application developers and users of the Information System (IS) on the completeness and efficiency of the database organization;
- testing the database, evolving it and improving (adjusting) its structure.
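To make stages such as the synthesis of a conceptual model and its mapping onto a logical schema concrete, here is the minimal sketch promised in the list above, written in generic SQL. The entities and attributes (department, employee) are hypothetical; a real project would target the DDL dialect of the selected DBMS.

    -- Hypothetical fragment of a logical schema derived from a conceptual model:
    -- the entities DEPARTMENT and EMPLOYEE and the one-to-many relation between
    -- them become two tables; the relation becomes a foreign key.
    CREATE TABLE department (
        dept_id   INTEGER      PRIMARY KEY,
        dept_name VARCHAR(100) NOT NULL UNIQUE
    );

    CREATE TABLE employee (
        emp_id    INTEGER      PRIMARY KEY,
        full_name VARCHAR(200) NOT NULL,
        hire_date DATE         NOT NULL,
        dept_id   INTEGER      NOT NULL REFERENCES department(dept_id),
        -- an integrity constraint recorded declaratively (a "business rule")
        CHECK (hire_date >= DATE '1900-01-01')
    );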
There are good grounds to call this methodology classical: for the methods listed above, complete, integral methodical systems have been developed; for most of the methods, formalized models have been proposed; and these models – or at least their final expressive capabilities – have found real application in design practice.
The top-down design approach used the discipline of so-called structural analysis. Structurality here means the use of hierarchical structures for detailing data and functions, together with correspondingly rather “rigid” project procedures. The project scheme is called the “cascade” (waterfall) scheme. It is well coordinated with the similar scheme for designing software.
Behind the methodology – workshop of database design tools
Designing a complex, integrated and usually large database became a complex task in itself. The existence of an integral design methodology made it possible to take care of the “designer as shoemaker” and start sewing boots for him in the form of database design automation systems. This was helped, on the one hand, by the technological experience of organizing and computer-supporting software development and, on the other hand, by the use of active integrated data dictionaries/directories (DD/D). Thus CASE (Computer-Aided System Engineering) systems appeared for the structural design of databases and the related IS, oriented toward the data models implemented in various DBMS. The most popular were CASE systems for relational DBMS with the SQL data model, and the DD/D was renamed the CASE repository of the IS being designed.
Along this path, two basic directions of development of CASE systems and design technologies arose: CASE systems for designing the database proper (so-called Upper CASE) and integrated tools that allow both designing the database and developing the application programs that use it. It is important to note that even Upper CASE tools generally include many means for describing information processing functions (when the process approach is used for gathering and analyzing data about the subject area) and for storing these descriptions in a repository. This confirms the strong link between the database project and the IS project based on that database. However, this relationship is not absolute, and the principle of separating the database from the programs remains.
Often the integration of functions leads to a tight coupling of the CASE system with a single DBMS, the one toward which the CASE application development tools are oriented. Such coupling has several manifestations: for example, the CASE repository is supported by the “native” means of that DBMS and only by them; application programs are generated by the “native” development tools of the same DBMS and only by them. For such integrated CASE systems, the mapping of the conceptual database model onto a logical schema is often also done only for the predefined DBMS.
The latter fact relates to one more task that may be set when designing a database: designing a portable database that can be implemented on different types of computers, operating systems, DBMS and even data models, and, if necessary, transferred from one platform to another.
Taking the above into account, the classical DB designer’s workshop includes a set of classical structural design methods, a set of corresponding tools for modeling, implementing, loading and supporting a database, and the “cascade” organizational scheme for performing this work top-down.
In particular – about time characteristics and transactions
Ensuring database performance is still a difficult task, despite the growth in the specific power of computers and the fall in the specific cost of memory. Determining the time characteristics of database operations, and preserving those characteristics while the database is in operation, are among the most difficult design tasks. To determine a rational physical database schema, the design stages need the following means of determining time characteristics:
- the possibility of comparing the time parameters of implementations of different variants of the database schema on a given DBMS;
- the possibility of comparing the parameters of implementations of one database schema on different DBMS;
- the possibility of comparing the implementation parameters of one database schema on different hardware database servers;
- the possibility of predicting the time parameters of operation of various application programs and utility service programs.
The task of comparing the time parameters of different DBMS is considered an independent one. However, it must often be solved as part of the design task of selecting a DBMS for the database being designed, and in the course of that design.
The concept of the transaction was introduced to denote a complete set of database actions that transfers the database from one logically consistent state to another. The mechanisms of correct database actualization and recovery were the first to be built on this basis, but other mechanisms and methods came to rest on it as well.
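As a minimal sketch of this idea in standard SQL: the two updates below either both take effect or neither does, so the database moves between logically consistent states. The account table and its rows are hypothetical.

    -- A transaction: the invariant "the total balance is unchanged" holds
    -- because the two updates are applied atomically.
    START TRANSACTION;

    UPDATE account SET balance = balance - 100 WHERE acc_id = 1;
    UPDATE account SET balance = balance + 100 WHERE acc_id = 2;

    COMMIT;
    -- On an error, ROLLBACK instead restores the previous consistent state.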
In recent times the most popular DBMS benchmarks have evaluated time performance as the number of transactions of a certain standardized type per unit of time. Distributed processing is built on the basis of transaction monitors.
It will be necessary to discover the limits on how finely work can be divided into rather small portions. A very important effect should be noted here: the practice of orienting toward the “transaction approach” is closely connected with the classical database design methodology, which developed mainly as the design methodology of so-called “operational” databases, i.e. databases that must record the separate operations performed and store a model of the current actual state of the object or subject area.
Assessing the status quo
The reader may have the impression that we already have a modern methodology, or at least something close to it. Unfortunately, this is not the case, and perhaps we will never achieve anything of the kind. It is always easy to characterize a methodology at the conceptual level; it is very difficult to apply it in practice. The stumbling block is the difficulty of penetrating into the essence of the subject area (for example, the difficulty of understanding how an organization’s activity is actually arranged) and of adapting it to new, perhaps better, conditions of functioning.
Similar problems are characteristic of database systems in general. The database system must become an organic element of the organization’s management system – this is the key to its successful application. However, the process of its implementation is connected with certain changes in the organization itself and in the activities of its employees, and we will always face the natural inertia of people when it comes to the perception of changes…
It is very important that the DBMS tools are adequate to the users’ needs. Since different users may need different data models, data languages and schemes, it is desirable that the DBMS supports many tools and the user can choose the most appropriate ones. …
One could, of course, question the value of such research. Indeed, no matter how bad a programming language is, it can be learned after all. Similarly, DBMS tools can be mastered in a certain period of time. But the problem is not in mastering the means, it is in the efficiency of their use. The machine must be the servant of the human, and not vice versa!
Since then, the DBMS, database design methods and corresponding tools have significantly increased in capabilities. But the rest of the world did not stand still either, and the tasks faced by IS and database developers have become much more complicated.
Which classical design methods are used in present-day practice, and how
Used in practice:
- Hierarchical “cascade” scheme of database structural design in the “top-down” approach;
- CASE systems for the structural design of databases, of the IS as a whole, or additionally of IS applications. Most frequently used are: variants of the ER data model; the tabular relational model extended with one or another additional set of means for describing integrity constraints (referential integrity, business rules); and, for “process”-style analysis of sources, data flow models, possibly extended with additional control links (these links should not be confused with the dedicated flows of function execution conditions in the IDEF0 notation);
- database dynamic administration utilities providing the following functions:
- tracking the dynamics of database performance indicators: the indicators are available at any moment against the background of application work; these indicators (“statistics”) can be used to support optimal dynamic construction of data access paths (a small sketch follows this list),
- creating database backup copies as well as maintaining hot backup database copies against the background of application operation, restoring and rolling back fragments and the full database,
- dynamic database reorganization (relocation of the database and individual physical fragments, logical and physical data restructuring) is possible, but these possibilities are limited.
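As an illustration of the first of these functions, here is the small sketch promised above, assuming PostgreSQL: the statistics views are PostgreSQL’s own, while the table name orders is hypothetical.

    -- Refresh the planner's "statistics" for one table while applications
    -- keep running; the optimizer uses them to build data access paths.
    ANALYZE orders;

    -- Inspect accumulated performance indicators at any moment: many
    -- sequential scans and few index scans hint at a missing access path.
    SELECT relname, seq_scan, idx_scan, n_live_tup
    FROM pg_stat_user_tables
    WHERE relname = 'orders';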
User requirements for data representation are taken into account over a wider range than before. View-specific requirements have often evolved from the desirability of having different external data models to the desirability of having a large number of user tools with different interfaces and, in practice, different external data models.
What is lost
However, much of the classical heritage is not used in practice, or is used badly. First of all this concerns the formalized methods and models that are not built directly into the data model in use and must be applied by the designers themselves to obtain and verify high-quality design decisions, for example:
- the full procedure of normalization to higher normal forms and minimization of the set of relations is not carried out, or is carried out rarely; this possibility is seldom used in practice because of its unwieldiness and the high qualification it demands of the designer using a CASE tool (a small sketch of the skipped step follows this list);
- optimization of database placement on external memory devices is done “by eye”; the time-parameter benchmarks widespread today are not adapted to help solve this design problem;
- optimization of database placement across the nodes of a distributed database is likewise done “by eye”.
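As a small sketch of the normalization step that, as noted above, is often skipped: a hypothetical table mixing order and customer facts is decomposed to remove exactly the redundancy that the classical methods target.

    -- Before: customer_name depends only on customer_id, not on the key
    -- order_id, so every order row repeats the customer's name.
    CREATE TABLE order_flat (
        order_id      INTEGER PRIMARY KEY,
        customer_id   INTEGER NOT NULL,
        customer_name VARCHAR(200) NOT NULL,
        order_date    DATE NOT NULL
    );

    -- After (3NF): the transitive dependency moves into its own relation.
    CREATE TABLE customer (
        customer_id   INTEGER PRIMARY KEY,
        customer_name VARCHAR(200) NOT NULL
    );

    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        order_date  DATE NOT NULL
    );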
Much less attention has recently been given to tools for automating the physical design of databases, including mathematical and empirical modeling of database characteristics, in particular modeling that takes into account placement on the nodes of a distributed IS. Optimization of database placement across the nodes of a distributed database is not supported by the common CASE tools. Individual tools and works, including domestic research, do not set the tone in the workshop of design tools and do not sustain a living school in this direction.
There are, in our opinion, several reasons for this:
- the high demands that the theoretical foundations of classical database design place on the qualification of designers;
- the cumbersomeness of the methods used in the “cascade” scheme, given the practical impossibility of ensuring the stability of large integrated solutions in a world of constantly changing IS requirements;
- the relative ease of reorganizing the logical and physical database structure in relational DBMS (although, as noted at the end of this report, in modern conditions this very ease becomes one of the traps for the designer).
Limitations of classical methods
Classical models and methods focused on organizing the storage and processing of detailed, structured data that fit the concept of an “attribute”: an object property represented by an atomic data element. As a result, full-text databases, for example, were immediately set apart as a special type of database, and a separate stream formed for their design: information retrieval systems (IRS).
Years and decades after announcing their classic models and concepts, the classics themselves clearly described their limitations. For example, [Codd79] stated that “when discussing the modelling of data semantics, the emphasis is on structural aspects at the expense of processing aspects. Structural aspects without corresponding operations or implicit methods are similar to anatomy in the absence of physiology.”
Fourteen years later, E. Codd and his co-authors stated in [Codd93]: “… possession of a large corporate database is of little value if end users do not have the ability to easily synthesize the necessary information from these data stocks (warehouses). Too often we have exactly such circumstances.”
Finally, the time came when database (and IS in general) design according to the classical rules of completeness and integrity often became almost meaningless. Wesley P. Melling (Gartner Group) pointed out in [Melling95]: “By 1990, almost all aspects of the ‘standard procedure for working’ with IT (Information Technology – E.Z.) had been challenged, and computing architectures had escaped from control. … Programming standards were blurred, and the notion of non-redundant, consistent, high-quality data was only good for a heap of junk.”
The reasons for the new requirements
The phenomena of global computer communications and ubiquitous personal computing (together with the fall in the unit cost of computing facilities) have given databases and their design many new features. The list of stored and processed data types has extended to the limits defined by the most general meaning of the concept of “data”. Corporate databases include not only unformatted elements and full-text fragments, but also geoinformation databases and multimedia databases, and this list is not exhaustive.
Moreover, new IT opportunities, together with a number of purely economic causes, have increased market opportunities and consumer demands, sharply intensifying competition in various industries and services. The answer was the proclamation of the business engineering imperative: from the BPR of M. Hammer ([Hammer93]) to the construction of cyber-corporations by J. Martin ([Martin95]). These approaches require radical changes in the organization of the core business of enterprises, in particular:
- a drastic reduction in time, number of employees and other costs of performing production functions;
- business globalization: work with clients and partners anywhere in the world, as well as work with the client in 24*365 mode;
- reliance on the growth of staff mobility, providing the employee with every opportunity to obtain the final result independently;
- work on future needs of the client, accelerated promotion of new technologies.
If IT was one of the impulses for this development, it was also expected to ensure the success and the very possibility of the planned reconstructions. New requirements to corporate IS architecture arose, and consequently new requirements to corporate databases.
Just as an IS cannot be viewed in isolation from its users, the new design must be seen as an integration of three components: business reengineering requirements, the human factor, and the methods of new IT. The real integration of these three components, each of which acquired qualitatively new content in the 1990s, marked the beginning of what can be called New System Design (NSD).
What is needed from databases to respond to the new requirements?
Let us show the new requirements to corporate databases using two aspects of creating new corporate IS (out of the more than two dozen types of work that form the basis of NSD).
Providing maximum opportunities for each employee, that is, supporting the performance of all business functions by the employee who obtains the final result. For this purpose the IS, the DBMS and the database must provide:
- means of access to all necessary data using distributed databases, data replication, event management and transaction processing mechanisms;
- use of Data Warehouse architecture and software, online analytical processing (OLAP) tools and rapid application development (RAD) tools for creating “executive IS” (EIS), and decision support systems (DSS) based on the Data Warehouse, OLAP and RAD/EIS (a small aggregation sketch follows this list);
- application of DSS tools based on methods of database analysis, as well as logical inference methods, neural networks and neurocomputers, among others;
- a single user interface for working with different data components and applications, using in this interface tools that increase the ease of finding information and accessing specific application functions, such as geoinformation system functions, hypertext, natural language and speech input.
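As a small sketch of the OLAP-style analysis mentioned in this list, the grouping extensions of standard SQL compute aggregates across several dimensions at once; the sales table and its columns are hypothetical.

    -- ROLLUP produces per-region subtotals and a grand total in one pass:
    -- the kind of precomputed summary an OLAP tool presents interactively.
    SELECT region, product, SUM(amount) AS total_amount
    FROM sales
    GROUP BY ROLLUP (region, product);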
Developing the concept and structure of a corporate database for the new IS, and implementing a database structure that removes (or significantly weakens) the restrictions on its development, including when the functions or functional components of information processing change:
- applying methods of component design of subject databases both for operational databases and for the historical databases of Data Warehouses, document archives, geoinformation and other data;
- working out procedures for component-wise change of the corporate database when business procedures, types of activity, applications used and the geographic distribution of the enterprise change;
- constantly updating the conceptual model of the enterprise’s activity to account for new concepts arising when application components are replaced by functionally similar ones and when the enterprise’s types of activity change, and building on this basis new interfaces between IS components;
- dynamic administration of the fragments of the distributed corporate database as the frequency of their use, their structure and their placement change.
New tools of the designer’s workshop that have entered into practice
The SQL language, which in the 1980s was only one of the languages representing the relational model, became a real standard not only for the relational data model but for industrial DBMS in general. (At the same time, it is an example of an acquisition that can quickly become burdensome.)
In the real development of the most widespread organizational and production IS, in most cases or for most of the volume of work, development tools based on SQL embedded in a 3GL procedural language have been replaced by 4GL languages and tools with a windowed, menu-driven interface that use elements of the object-oriented programming concept (while preserving escape hatches into SQL and a procedural language).
Practically working de facto standards for interoperable work with data have appeared, first of all the ODBC standard.
Multi-platform support has become the norm; multiprotocol communications for distributed databases are implemented on the basis of standards and interoperable transaction monitors; “internationalization” is supported at least in terms of settings for national character encoding tables.
New data structures and types and new operations on data have appeared: unformatted elements, full-text databases and their processing, GIS data, multimedia databases, hypertext distributed databases, and distributed storage and processing delivered to the IS together with the object itself. In practice, steps toward real integration of the structures and operations mentioned can be observed.
The approach to selecting a DBMS is changing, first of all for the design of corporate databases whose operation and development are planned for at least several years ahead. Economic grounds and criteria for DBMS selection are increasingly being used.
Object orientation in database design is not considered here as a new tool that already exists in practice (object-oriented programming is not meant here). At present it seems reasonable to still assign such design to the research areas.
Toward new approaches in database design organization
Since the new requirements are largely, if not decisively, associated with an increased rate of change in IS requirements, new approaches to design methods are inextricably linked with a new design organization.
Cascade schemes for organizing the design of software and IS began to transform into cyclic forms quite long ago. Thus, the continuing development organization at IBM Corp. (see [Fox85]) provided for continuous, controlled development of a software system in the form of transferring its successive versions into operation.
Nowadays, various cyclic and spiral schemes are considered as a means of gaining the advantages of rapid IS prototyping while excluding its disadvantages (uncontrollability) by applying classical structural methods on each coil of the spiral.
However, such cyclic schemes retain many old shortcomings of the structural methods. For NSD conditions, the important shortcomings are:
- laboriousness of making changes to the existing components;
- limited possibilities for component-based design by assembling and re-assembling various ready-made components.
There are others as well, for example the unwieldiness of maintaining design documentation. All of this fully applies to database design too.
Under component design, the organizational scheme of database design should look like a scheme of parallel spiral design of database components and, when necessary, their assembly.
One can often encounter claims that object-oriented design and programming solve the problems generated by the structural approach that remain when cyclic schemes are used. However, with such technologies the issues of semantic interoperability, especially in component design, can in reality become even more complicated because of encapsulated descriptions and lesser attention to the discipline of data documentation. It seems too early to draw conclusions about the limits of applicability of these approaches.
From new requirements for types and sources of information to new database architectural principles
The most important task of designing the corporate database architecture is to support work with a variety of types and sources of information. In real practice, the sources and consumers of information are not only the subdivisions of the enterprise itself, the head office of the holding company or the Ministry’s office, but also enterprises in other industries (potential suppliers and consumers, state regulatory bodies, etc.). The principle of business globalization dictates that the sources and consumers of information will be located at whatever geographic point is necessary.
Hence follow the strategic decisions for the architecture of the database and the IS as a whole. Combining the requirements for the dynamics and diversity of the types of information flows processed in the IS, the growth of their volumes, and the requirements for the diversity of processing methods yields the following generalized characteristics of the technologies forming the database architecture as part of the IS:
- component technology for designing and re-assembling subject-oriented operational databases that allow users to work through common interfaces, including interfaces to the Data Warehouse;
- extended Data Warehouse technology integrating historical formatted data, archives of text documents, sound and video archives, as well as cartographic data, and including means of online analytical data processing and the necessary kinds of “friendly” interfaces;
- openness of the database for contributing information to it and receiving information from it using the principles of the Global Information Highway;
- the Open Systems architecture extended by methods and means of component formation: at the upper level, openness of component database design and free information exchange with any external systems; at the lower level, technological openness of the database based on standards of portability, interoperability, scalability, etc.
The extended technology of the integrated Warehouse forces us to pose anew the question of developing an integrated set of user interfaces that would create natural conditions for working with information and functions, regardless of the class of stored data to which the developer is forced today to assign the user’s information.
Toward new approaches in database design methods
The recommendations below on new methods and tools of database design can be considered an answer to the new requirements (bearing in mind, of course, that everything new is the well-forgotten old).
On the exclusion of redundancy in data
What remains reasonable is the requirement of single entry of data into the database for the solution of different tasks, and of protection against contradictions (violations of logical integrity) when the stored data are actualized. However, under the conditions of a global information space and component design, the context of these requirements should be reconsidered. Undoubtedly, in operational databases it is rational to plan “islands” of normalized clusters of relations or objects, de-redundant in the classical sense. These “islands” will most often be the long-familiar subject databases.
At the same time, combining the historical data of Warehouses, GIS databases, archives of text documents, information flows received over the Information Highway, etc. in the overall statement of corporate database design leads to abandoning redundancy exclusion as a universal principle: the design of a corporate database at the level of the logical schema and at the conceptual level no longer relies on the requirement and procedures of redundancy exclusion as a global criterion.
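A hypothetical illustration of such a deliberately redundant Warehouse “island” (all names invented): descriptive attributes that an operational database would normalize away are simply repeated in every historical row.

    -- Redundancy is accepted here in exchange for simple, fast analytical
    -- reads over historical data that are never updated in place.
    CREATE TABLE sales_history (
        sale_date     DATE NOT NULL,
        customer_name VARCHAR(200) NOT NULL,  -- repeated, not referenced
        product_name  VARCHAR(200) NOT NULL,  -- repeated, not referenced
        amount        NUMERIC(12,2) NOT NULL
    );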
The new situation, including a substantial information flow from the Information Highway into the corporate database that is not regulated in advance, will require developing or strengthening “identification procedures” for instances of information structures, i.e. procedures for determining that these instances describe the same object of the real world.
The problem of conserving problems
The disciplined nature of design provided by the cascade scheme, structural design methods and hierarchical approaches and structures now pushes designers to fix certain models of the subject area rather rigidly. The database design technology should be changed so as to prevent the preservation of the enterprises’ existing problems in rigid, “seamlessly integrated” database structures. For this it may be necessary to change not only the technology but also the design tools.
Proposed approaches:
- the possibility of fixing descriptions of attributes, entities, relations, functions, etc. with any degree of incompleteness, and of producing descriptions at the level of non-detailed, substantively related sets of information structures (“entity clusters”);
- design or reconstruction of models of IS and database components, and their integration in a common conceptual space;
- designing an ordered sequence of corporate database states as a sequence of the sets of databases in operation, including inherited (legacy) databases, structurally predefined databases of “purchased” functional components, and databases designed specifically for the given enterprise, with the latter two categories gradually replacing the inherited ones and, then and in parallel, replacing one another as the IS develops;
- openness of the CASE system repository, the DBMS dictionary and the 4GL system, allowing meta-objects and mechanisms to be built up with the required thesaurus and deep semantic relations between elements, and also allowing bilateral exchange of meta-information with other 4GL and CASE systems and the merging of models of different components into one while using and preserving all the necessary semantic relations.
Component openness and semantic interoperability
A component approach in IS development requires component database design. Replacing some functional component of the IS with a similar one from another developer will require structural replacement of some part of the corporate database. Such replacement should be supported as a constant process of database redesign. When a database component is replaced, the interfaces of the existing applications and their users must receive exactly the same information, in the semantic respect, as before.
Actual component design of databases can be based on forming and using a conceptual model common to the components being integrated, and on maintaining the correspondence between the models of the database components (and the applications connected with them) and the general conceptual model.
Conceptual model development and CCM
The necessity of using common conceptual models makes us reconsider the problem of designing the databases of what is called NSI (normative reference information) and CCM (classification and coding systems).
Until now there has been a widespread opinion that a CCM is merely a means of compressing the information representation in an integrated database. In fact, the absence of CCMs, or the use of incorrectly constructed CCMs, leads to semantic incompatibility of the information stored in different databases or even in one database. Under these conditions, even the most advanced modes of technological interoperability will not achieve the goals. It is therefore expedient to use the work on designing the NSI database and the CCM as the beginning of and basis for creating the concept space of the database and for building the conceptual model of the enterprise’s activity.
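A minimal sketch of the idea in generic SQL, with hypothetical names: a shared classifier table gives independently designed components one coding of the same real-world concept, so their data remain semantically comparable.

    -- One agreed coding of industries, maintained as reference (NSI) data.
    CREATE TABLE industry_classifier (
        industry_code VARCHAR(10)  PRIMARY KEY,
        industry_name VARCHAR(200) NOT NULL
    );

    -- Two components designed by different teams reference the same
    -- classifier, so "the same industry" means the same thing in both.
    CREATE TABLE supplier (
        supplier_id   INTEGER PRIMARY KEY,
        industry_code VARCHAR(10) REFERENCES industry_classifier(industry_code)
    );

    CREATE TABLE client (
        client_id     INTEGER PRIMARY KEY,
        industry_code VARCHAR(10) REFERENCES industry_classifier(industry_code)
    );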
Conceptual models and the subsequent design works
At the subsequent design stages of the database itself, the conceptual model continues to be used for various purposes, for example:
- developing a set of different subject information models with common information entities highlighted;
- developing functional models of different types;
- developing semantically rich means of user support, etc.
Technological openness
In the IS of the new architecture, the DBMS will be the defining but not the only component of the integrating software (including middleware). Transaction and process monitors, semantic and conceptual modeling tools, and DBMS-independent application development and execution tools are other classes of software components that ensure the achievement of this goal.
It is recommended to maintain independence from the DBMS by using tools and standards that cover different DBMS. Not being tied to a single DBMS, openness of the CASE repository, and the possibility of developing the metamodels supported in the repository and the design procedures applied to them are only the minimum requirements for methods and tools.
It is expedient to rely on CASE systems oriented toward parallel design of components by independent developers (including developers not using the given CASE system) with subsequent integration of the meta-information.
Application development tools should satisfy the requirements of application portability and, at the same time, of working in the heterogeneous environment of a distributed database.
Problems of volumes, time characteristics and physical design
The distribution of VLDB-class databases requires more active use of methods for designing effective physical data schemas. It is impossible to build such databases relying on constant reorganization by copying them into new physical structures. This is true for operational databases in OLTP mode, and all the more true for terabyte OLAP-oriented databases. The simplicity of carrying out reorganizations by the method mentioned can become a “trap” for the designer, especially at the first stages of bringing the database into service, when recopying it is still possible because of the incomplete volume.
At the level of the new technologies (multidimensional structures, bitmap indexes, etc.), it is reasonable to return to methods of forecasting database performance characteristics that would allow planning the stability of the physical schema, at least for the period when economic possibilities do not allow expanding external memory of different levels in order to apply other approaches.
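As one concrete instance of these physical-design levers, here is a sketch assuming Oracle’s SQL dialect, where bitmap indexes are available; the table and column names are hypothetical.

    -- A bitmap index suits low-cardinality columns of large, read-mostly
    -- tables: the optimizer combines compressed bitmaps with AND/OR
    -- instead of scanning rows.
    CREATE BITMAP INDEX idx_sales_region
        ON sales_history (region_code);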
The large growth of database volumes will be accompanied by growing requirements for their reliability. Because of the constant process of database redesign, database design tools and methods will directly adjoin database management tools. Thus, to make the data resistant to failures, it is necessary to possess means of managing and synchronizing geographically separated shadow and standby databases.
The Problem of Applicability Boundaries of Two Basic Design Methods
In the course of research and practical design, the boundaries of applicability of two concepts should be defined: database design as design of an object consciously separated from the application programs, and object-oriented design, in which an object encapsulates both data and the methods of processing them.
CONCLUSION
Creating corporate databases under the conditions of New System Design is an activity that uses many methods of classical design, but it demands a different organization and many additional methods, including new ones to replace some of those developed ten or more years ago.
A discipline of database design for the new conditions does not yet exist. Nevertheless, its beginnings are visible, and its elements already work in real projects.
In accordance with the principle of preserving immunity to computer revolutions, classical database design methods should continue to be used, but only in the areas where they are really useful. The design methods considered in concrete projects of corporate IS and databases, and the corresponding tools, should be tested for their ability to provide functions according to the New System Design requirements.