TWM is a company with offices in three different states in the United States. The company provides consultancy management services at the national level. It has about 95 employees who work as IT technicians, sales and marketing personnel, human resource managers, and administrators. The company maintains a database of consultant information. Access to the database is controlled by passwords: sales personnel key in consultant information, administrators query it, and customers access it through an online interface that allows them to ask questions about a consultant. All of this information is transferred from a server in each location to the main server at the headquarters, where the data is merged. The company is currently experiencing growth and expansion, which overloads its database server and degrades its performance. The company wishes to make a number of adjustments to its database system to ensure high availability and data security. This paper guides the company on how to achieve this.
Database Growth and Design
Database design is strongly influenced by the rate of database growth, which in turn reflects business growth in an organization. Business growth increases the number of stored records and thus the size of the database. This growth can easily degrade performance, since the server's processing power remains the same even as the database grows substantially. To maintain a good performance level, database growth must therefore be taken into account during physical database design. Index and table storage is normally planned at the physical design stage to enhance performance and simplify data administration tasks. When structuring physical storage, it is important to consider both the size and the placement of indexes and tables. Fundamentally, the performance of most database applications is I/O-bound. To improve I/O throughput, the design should place tables on physically separate storage rather than co-locating them, so that database reads and writes can proceed in parallel (Lingtstone et al., 2010).
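The benefit of planning indexes at the design stage can be illustrated with a minimal sketch, using SQLite as a stand-in for the production DBMS. The table and column names (consultants, last_name) are assumptions for illustration only; the point is that a planned index changes a lookup from a full scan into an index seek.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE consultants (id INTEGER PRIMARY KEY, last_name TEXT, state TEXT)"
)
conn.executemany(
    "INSERT INTO consultants (last_name, state) VALUES (?, ?)",
    [("Smith", "TX"), ("Jones", "NY"), ("Lee", "CA")],
)

# Without an index, the lookup requires a full table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM consultants WHERE last_name = 'Lee'"
).fetchall()
print(plan_before[0][3])  # e.g. a SCAN over the consultants table

# Planning the index at design time turns the lookup into an index seek.
conn.execute("CREATE INDEX idx_last_name ON consultants (last_name)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM consultants WHERE last_name = 'Lee'"
).fetchall()
print(plan_after[0][3])  # e.g. a SEARCH using idx_last_name
```

The exact plan wording varies by SQLite version, but the shift from a scan to an index search is the same performance effect that physical design planning aims for at scale.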
Physical database planning also affects two key database administration concerns: data fragmentation and space management. Although a small database can operate effectively with fragmented data, this is not the case when the data volume is high and keeps increasing on a daily basis. Proper planning is therefore required from the beginning to ensure that an appropriate space management technique is employed to accommodate database growth. The design should take the growth rate into account so that measures that improve the performance of large databases, such as data compression, are implemented during the design stage. In this case, the 5% annual rate of database growth will be factored into the design so that the database caters for the data volume over a long period. This will ensure that the database still performs effectively even after substantial growth (Lingtstone et al., 2014).
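A back-of-the-envelope projection shows what the 5% annual growth rate implies for capacity planning. The 200 GB starting size and the ten-year horizon below are illustrative assumptions, not figures from the company:

```python
def projected_size_gb(current_gb: float, annual_growth: float, years: int) -> float:
    """Compound the database size by a fixed annual growth rate."""
    return current_gb * (1 + annual_growth) ** years

start = 200.0  # assumed current size in GB
for year in (1, 5, 10):
    print(f"Year {year}: {projected_size_gb(start, 0.05, year):.1f} GB")
```

At 5% compounded growth, the database grows by roughly 63% over ten years, which is the kind of margin the physical design and space management plan should accommodate from the outset.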
Potential Concerns Regarding SQL Server Upgrading
There are a number of concerns related to upgrading from Windows Server 2008 to Windows Server 2012. One major concern is the version to be installed: the two servers must be of compatible versions. This also raises the issue of hardware requirements. Although the two servers have almost identical hardware requirements, Windows Server 2008 has a version that can run on a 32-bit machine, while Windows Server 2012 requires a minimum of 64 bits. If the 2008 server was running on a 32-bit machine, a new 64-bit server will be required. The company should therefore evaluate the 2008 server version in use, as well as the hardware specifications, to confirm that the current server can handle the 2012 server's requirements. Another consideration is the form of upgrade to be performed. The company should ensure that the upgrade or migration process does not disrupt its operations; a technique that minimizes downtime should therefore be chosen. Here, the upgrade can be done so that both servers run in parallel before the 2008 server is switched off. This supports the testing process and ensures that the 2008 server is switched off only after the administrator is confident that the 2012 server can support the required operations (Technet Microsoft, 2013).
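The hardware-evaluation step above can be sketched as a simple pre-upgrade check. The 64-bit requirement comes from the text; the function name and the RAM threshold are assumptions for illustration, not actual product minimums:

```python
def can_upgrade_in_place(cpu_bits: int, ram_gb: float, min_ram_gb: float = 2.0) -> bool:
    """Return True if the current host meets the assumed 2012-server minimums.

    A 32-bit host fails immediately: the 2012 server requires 64-bit hardware,
    so such a machine must be replaced rather than upgraded in place.
    """
    return cpu_bits >= 64 and ram_gb >= min_ram_gb

print(can_upgrade_in_place(32, 8))  # False: 32-bit host needs new hardware
print(can_upgrade_in_place(64, 8))  # True: in-place upgrade is possible
```

In practice this evaluation would be done against the vendor's published requirements rather than hard-coded thresholds, but the decision logic is the same: a 32-bit host forces a hardware purchase before the migration can proceed.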
Login Account Recommendation
Information security is very important since it determines the extent to which an organization can use information to grow. Security ensures data availability, confidentiality, and integrity, so any database administrator must enforce the highest practical level of security. Security in the database is enforced by demanding authentication, which is validated before a user is granted access. In addition, the database system supports access rights, whereby certain user accounts are blocked from viewing parts of the database while others are granted the right to view and edit everything. Encryption is also used to protect data in the database. In this case, the company should limit database access by applying rights restrictions to junior staff such as sales persons and consultants. Consultants can adhere to the current password policy through the rights management aspect of Active Directory, namely Active Directory Rights Management Services, together with Active Directory Domain Services (Technet Microsoft, 2013). Consultants can be provided with online accounts, governed by Active Directory Domain Services, to register into the system. They will be allowed to view their own details and to see inquiries posted by customers about them, and they can respond to those inquiries; however, they should not be allowed to change anything in the database. For any needed change, a consultant must go through the sales person responsible for keying in the data, and a protocol will be followed from there.
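A password policy of the kind an Active Directory Domain Services account policy might enforce for consultant accounts can be sketched as follows. The specific rules (minimum eight characters, mixed case, at least one digit) are illustrative assumptions, not the company's actual policy:

```python
import re

def meets_policy(password: str) -> bool:
    """Check a candidate password against assumed complexity rules."""
    return (
        len(password) >= 8                              # minimum length
        and re.search(r"[a-z]", password) is not None   # lowercase letter
        and re.search(r"[A-Z]", password) is not None   # uppercase letter
        and re.search(r"\d", password) is not None      # digit
    )

print(meets_policy("weak"))        # False: too short, no upper case or digit
print(meets_policy("Str0ngPass"))  # True: satisfies all assumed rules
```

In a real deployment these rules live in the directory's group policy rather than in application code; the sketch only shows the kind of validation that the policy applies before a consultant's credentials are accepted.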
Two Ways to Address High Availability Issue
High availability in computing refers to a system structured to prevent loss of service by managing and reducing failures and minimizing planned downtime. In such a system, outages are unacceptable, and if they do occur they should not even be noticeable. A number of techniques can be employed to achieve this; the two recommended here are data partitioning and building a fault-tolerant system. Both are recommended because they reduce outage frequency and the time required to access data and shared files in the system. Data partitioning involves storing different data on different servers even when the data is used or needed together by an application. This can be implemented using Microsoft DFS technology, which allows one to build a virtual file system spanning physical nodes across the entire network. With this technology, the directory is structured so that users perceive the files as originating from a single source despite their distribution. This is readily achieved once servers are distributed across locations; it also requires a reliable network that eases access to all of these servers. A fault-tolerant system involves building redundancy into the system, so that server content is duplicated elsewhere and, in case of any failure, everything can still be accessed from the redundant system. This may require duplicating all servers used in the company to provide the redundant capacity; however, cloud services have made this process cheaper. The company may consider purchasing database-as-a-service to ensure that it can access all resources stored in the database even after a failure that affects the system fully or partly. This is considered a cheaper and more reliable way of achieving fault tolerance (Sakellariadis, 2002).
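The data-partitioning idea can be sketched with a stable-hash scheme that spreads records across several servers so reads and writes can proceed in parallel. The server names and consultant IDs below are placeholders, and a production system (e.g. DFS namespaces) would map data to nodes differently; the sketch only shows the deterministic key-to-server assignment:

```python
import hashlib

SERVERS = ["branch-a", "branch-b", "branch-c"]  # placeholder node names

def server_for(key: str) -> str:
    """Deterministically map a record key to one of the servers."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

for consultant_id in ("C-1001", "C-1002", "C-1003"):
    print(consultant_id, "->", server_for(consultant_id))
```

Because the mapping is deterministic, any client can locate a record without a central lookup, which is the property that lets a virtual namespace present distributed files as if they came from a single source.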
Designed Replication Topology
Data replication entails copying and transferring data to a different location, typically in near-real time or real time. The replication in this case will involve storing each branch's content on one server and then replicating the contents from the three branches, and in future the five branches, to a server located in the main office. This means that the server in the main office acts as a redundant server. Each branch will have a server handling all its entries, and this content will then be transmitted in real time or near-real time to the server at the headquarters, depending on network conditions. Latency is therefore determined by network speed, which in this design is assumed to support real-time transfer. The transfer of data from the branches to the central server should be automatic, requiring no user awareness or intervention. The headquarters server should be configured to ensure that the data received from the branches is consistent. A number of tools assist in enforcing data consistency; these should be employed to verify consistency by comparing the source data with the received data.
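The source-versus-received comparison described above can be sketched as a checksum check: after replication, the headquarters server compares a checksum of a branch's rows with a checksum of the replicated copy. The row layout below is an illustrative assumption:

```python
import hashlib

def table_checksum(rows) -> str:
    """Order-insensitive checksum over a list of row tuples."""
    h = hashlib.sha256()
    for row in sorted(rows):
        h.update(repr(row).encode())
    return h.hexdigest()

branch_rows = [(1, "Smith"), (2, "Jones")]
replica_rows = [(2, "Jones"), (1, "Smith")]  # same data, different arrival order
print(table_checksum(branch_rows) == table_checksum(replica_rows))  # True

replica_rows.append((3, "Lee"))  # simulate a missed or extra update
print(table_checksum(branch_rows) == table_checksum(replica_rows))  # False
```

Sorting before hashing makes the comparison independent of the order in which replicated rows arrive, so a mismatch indicates genuinely divergent data rather than a transmission-order difference.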
Proposed Changes
The company wishes to provide each branch with a server to handle its content. These servers will then replicate their content to another server located in the company's headquarters. The headquarters server will accommodate data from all the other sources, via the company's network, to create redundancy. The company is situated in three different states in America and operates a wireless wide area network to transfer data from the branch offices to the main office.