
5 Ways to Build a Better Database for Your Business

Businesses rely on data more than ever these days. Everything from decision-making to getting a clearer view of operations can be done with greater accuracy when the right set of data is in hand. A strong, efficient database (or databases) sits at the heart of it all, and businesses now maintain extensive databases on site and in the cloud for a variety of reasons.

The conventional SQL database with the traditional table structure is usually sufficient for most operations, but that doesn’t mean you cannot take it a step further. For more complex business needs, building a better database is a necessity rather than an option. If you are the engineer responsible, these five tips will help you get started with building a better database for your business or your business clients.

 

Create an Information Structure

Making an information structure is a step that even the most experienced database engineers often skip. It is a tedious process of defining the types of data that need to be collected, how to process them, and the kind of output that is expected from data processing routines. It is much simpler to just build the tables and work from there, isn’t it?

In the long run, though, skipping a well-defined information structure tends to cause problems. When you start by creating an information structure, you also work out something very important: the objectives of building and maintaining the business database.

Let’s say the business needs to maintain a consistent picture of customers and their activities. You can then structure information in a way that allows queries to accurately depict customers’ behavior and interactions with the business. From that simple objective, it is also easier to determine what data to gather (and when to collect it) and how details about customers should be stored.
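As a rough illustration only (the table and column names here are hypothetical, not taken from any particular system), that objective might translate into a structure along these lines:

-- Hypothetical sketch: customers and their activities kept in related tables
-- so that queries can reconstruct behavior and interactions over time.
CREATE TABLE Customers (
    CustomerID  INT IDENTITY(1,1) PRIMARY KEY,
    FullName    NVARCHAR(200) NOT NULL,
    Email       NVARCHAR(200) NOT NULL,
    CreatedAt   DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);

CREATE TABLE CustomerActivities (
    ActivityID   BIGINT IDENTITY(1,1) PRIMARY KEY,
    CustomerID   INT NOT NULL REFERENCES Customers(CustomerID),
    ActivityType NVARCHAR(50) NOT NULL,  -- e.g. 'purchase', 'support_call'
    OccurredAt   DATETIME2 NOT NULL,
    Details      NVARCHAR(MAX) NULL
);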

 

Integrate

Data fragmentation is a serious issue in many corporate databases. Instead of using information in a convergent way, businesses end up with data siloed by department and by part of the operation, which makes it much harder to see the big picture.

This is a flaw that you can fix from the start. Data integration should not be a problem when you have a clear plan and a set of objectives in mind. Once again, having a good and well-defined information structure gives you a head start in this matter.

Every query can be constructed to retrieve and process data as needed, even when the data is stored in different tables. Advanced queries that pull from multiple databases can also be built, whether in SQL itself or from application code. As long as you have a clear plan, data integration should not be a problem.
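As a hedged sketch (again with hypothetical names, and assuming order data lives in a separate SalesDB database on the same SQL Server instance), a single query can pull the pieces together:

-- Hypothetical sketch: combine customer records with order data stored
-- in a separate database (SalesDB) on the same SQL Server instance.
SELECT c.CustomerID,
       c.FullName,
       COUNT(o.OrderID)   AS OrderCount,
       SUM(o.TotalAmount) AS LifetimeValue
FROM Customers AS c
JOIN SalesDB.dbo.Orders AS o          -- three-part name crosses databases
    ON o.CustomerID = c.CustomerID
GROUP BY c.CustomerID, c.FullName;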

 

Maintain Transaction History

SQL Server now supports Temporal Tables, which automatically maintain a full history of changes made to the data. Using temporal tables and system versioning, keeping track of time-sensitive data changes is easier than you might think.
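A minimal sketch of what that looks like in SQL Server 2016 and later (the table and column names are illustrative):

-- Illustrative system-versioned (temporal) table; SQL Server keeps the
-- ValidFrom/ValidTo period columns and the history table up to date itself.
CREATE TABLE dbo.ProductInventory (
    ProductID  INT NOT NULL PRIMARY KEY,
    Quantity   INT NOT NULL,
    ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductInventoryHistory));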

Having a complete transaction history enables you to do more things that can benefit the business. For example, the system automatically manages the validity period of each row, which means data processing and analysis can also utilize time sensitive entries accordingly.

When the business needs the latest inventory of certain products, for example, complex queries with clauses like GROUP BY, WHERE, or FOR SYSTEM_TIME are not required. All you need is a standard SELECT * FROM table-name and you are all set; the historical versions remain available through FOR SYSTEM_TIME whenever you do need them.
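Using the hypothetical inventory table sketched above, the difference looks something like this:

-- Current state: a plain SELECT returns only the rows that are valid now.
SELECT * FROM dbo.ProductInventory;

-- The change history is still there when you need it, via FOR SYSTEM_TIME
-- (the timestamp below is just an example point in time).
SELECT * FROM dbo.ProductInventory
FOR SYSTEM_TIME AS OF '2018-01-01T00:00:00';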

Keep in mind that this simple query is also friendlier to the server: it keeps resource usage at an optimum level while still enabling advanced data analysis.

 

Optimize for Transactional Queries

SQL isn’t procedural by nature, but the way you set up the database can make transactional queries and other requirements easier to handle. Using Microsoft’s Transact-SQL (T-SQL) extensions, for example, you can add user-defined functions (UDFs) and further optimize the business database. With careful planning, taking your business database to a whole new level is only a couple of steps away.

Transactional queries are useful for business users. Most of these constructs are compatible with different versions of SQL Server as well as Azure SQL Database. Simple additions like correlated subqueries are immensely useful when you need to build a complex set of data based on how rows relate to each other.
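For instance, a correlated subquery (again using the hypothetical customer tables from earlier) re-evaluates the inner query for every outer row:

-- Hypothetical correlated subquery: for each customer, the inner query is
-- re-evaluated to find that customer's most recent recorded activity.
SELECT c.CustomerID,
       c.FullName,
       (SELECT MAX(a.OccurredAt)
        FROM CustomerActivities AS a
        WHERE a.CustomerID = c.CustomerID) AS LastActivityAt
FROM Customers AS c;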

You can also optimize supporting systems, including the business software that connects to the database, to use transactional queries. Clauses like UNION and HAVING are very useful for advanced data queries and analysis, all while making the process of acquiring, processing, and storing that data more manageable.
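A short sketch of how those clauses might combine (the WebActivities and StoreActivities tables are hypothetical):

-- Hypothetical sketch: UNION ALL merges activity from two sources, and
-- HAVING keeps only customers with more than ten recorded activities.
SELECT CustomerID, COUNT(*) AS ActivityCount
FROM (
    SELECT CustomerID FROM WebActivities
    UNION ALL
    SELECT CustomerID FROM StoreActivities
) AS AllActivities
GROUP BY CustomerID
HAVING COUNT(*) > 10;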

 

Think About Maintenance

Having a strong team of database specialists is the best way to keep the database itself well-maintained in the long run. When the people maintaining business databases know exactly what they are doing, they can do more than just keep the database running.

Database cleanups and regular maintenance routines are among the things you can do to keep the database optimized. There are other advanced tasks to add to your maintenance routine, including performance optimizations and updates.

Many SQL training programs now focus on these long-term maintenance tasks as well as on developing a capable database with the available tools. Investing in training for your database specialists is a must, and there are a number of useful SQL training courses available right now; you should be able to find one in your area.

With the tips and tricks that we covered in this article, building a stronger, more capable database for business is certainly easier to do. If you have your own database secrets that you want to share, be sure to leave them in the Comments section below.

 

How to Install Harbor on CentOS 7 using Bash

It's been quiet here on the blog, but I finally got around to getting something nifty out the door!

 

Harbor is an open source project that is sponsored by VMware and is currently being sandboxed by the CNCF. It's a container registry with all the bells and whistles, including Clair for CVE (Common Vulnerabilities and Exposures) scanning and Notary for image signing.

 

I originally began playing with Harbor as a component of the Pivotal Container Service (PKS) package since it was all bundled and has automated deploy capabilities. After exploring what Harbor had to offer, I wanted to use it with my existing Kubernetes clusters that were built with kubeadm outside of PKS. I began by deploying the OVA into my vSphere environment, ran into issues, and learned the OVA was a deprecated form of installation (#5276). I decided to try the online version of the installer, which pulls images from DockerHub. I've been using CentOS a lot more than Ubuntu lately because it maps more closely to customer environments. So create a new CentOS 7 virtual machine from a template or build one out.

 

The installation and configuration directions in Harbor's README are a bit like a "choose your own adventure" book. For instance, "Install it like X if you want to use Y feature". The best thing about Harbor is that it has a bunch of features, so I wanted to use them all. In an effort to streamline the process rather than figure it out line by line, it made more sense to turn this into a bash installation script!

 

The script will use the virtual machine's fully qualified domain name to automatically generate the files needed, and it relies on self-signed certificates for quick and easy usage. For my scenario, the virtual machine host name is harbor01 and the domain is vsphere.local. Once again, this is tailored for CentOS 7. All commands are performed ON the Harbor VM. If you want to push images from a different machine to the Harbor instance, take the self-signed CA certificates within the `openssl` folder and place them on your machine in the locations shown for Docker and Notary.

 

Read more: How to Install Harbor on CentOS 7 using Bash

Closing My Chapter With The {code} Team

TL;DR Dell Technologies is no longer funding the open source initiative of The {code} Team (read more). I am looking for a new opportunity that touches on containers, Kubernetes, Docker, cloud native, developer advocacy, Golang, Node.js, and more. Connect with me on LinkedIn or Twitter, or view my resume.
 
In early 2014, I got a phone call from Brian Gracely about this idea to form a group to explore what open source means at EMC. What did it look like? No idea. There was no roadmap, sales pipeline, or product idea. Just a general concept of trying to get EMC recognized in the emerging trend of development and open source, à la the new kingmakers. It was up to us to make this successful.
 
After months of waiting, the time had finally come. In October 2014, along with Clint Kitson and Jonas Rosland, EMC {code} was formed. We spent the better part of four months trying to find an identity. We developed small applications that sparked our interest, from S3 migration tools to Vagrant standardization to even a Photo Booth. We spoke at meetups and conferences on DevOps, NodeJS, and every other technology we knew about at the time. We evaluated emerging trends in the datacenter and tried to make sense of how we could be a part of them. We visited pre-sales engineering teams and got them up to speed on modern development practices. We were throwing stuff against the wall to see what would stick.
 
In early 2015, the team noticed the container movement and narrowed its focus. This was when Docker was creating the Volume Interface within their experimental branch. REX-Ray and Docker were in their infancy, but we decided to put all our effort into solving container persistence and making REX-Ray the best possible solution. {code} hired more engineers and expanded with a marketing presence. With this new blood, we had the ability to get our projects in front of larger audiences all over the globe. From there, we began solving persistence with other container platforms. The team developed the mechanism that allows Mesos to have data persistence (which is now merged upstream) and also began tackling Kubernetes integrations.
 
There are a lot of achievements I’m overlooking, but fast forward to now. The team has secured a project’s move toward a neutral governance home and worked with the community to build the Container Storage Interface, which has been adopted by Kubernetes and Mesos, with a commitment from Cloud Foundry as well, to increase container adoption. Our projects were accompanied by successes, mistakes, new relationships, and a community growing around them.
 
Read more: Closing My Chapter With The {code} Team

4 Factors to Consider when Picking a PCB Design Tool

If you pick engineering design software based on the wrong criteria, you will, at best, get a product that takes more time and money to utilize than you’d otherwise spend. At worst, you’ll buy software that fails to meet your needs and gets in the way of work. Here are four factors to consider when picking a printed circuit board tool.

Maximizing the Efficiency of Your Team

The best PCB design tools maximize the efficiency of your team by automating as many tasks as possible or simplifying them to the greatest degree. For example, software that automatically checks for electromagnetic interference and thermal problems eliminates the need for your team to do that work by hand.

PCB tools that make it easy to check the design’s dimensions relative to the rest of the assembly, or to export designs so you can send them to your board manufacturer for initial input, are preferable to those that make these steps a chore. If exporting a design to verify that it will work once built is time-consuming or frustrating, you’re unlikely to do it more than once. If the process is simple, you can run such checks repeatedly without much extra work.

Read more: 4 Factors to Consider when Picking a PCB Design Tool

My Constraints Aren’t Your Constraints: A Lesson to Learn with Containers

After digging through the details of the hottest new technology, have you immediately thought “we need to start using this tomorrow!”? This is a common pitfall I see often. Buzzwords get tossed around so frequently that you feel you are doing things the wrong way.

 

Let’s take Netflix as an example. Netflix is ubiquitously known as the company that made microservice architecture popular (or better yet, Adrian Cockcroft did). Netflix’s goal of delivering streaming content involved a lot of different services, and the company needed a way to increase the speed at which those services could be updated. Amazon’s Jeff Bezos is quoted in his API mandate as saying “All teams will henceforth expose their data and functionality through service interfaces.” This was done to allow any BU, from marketing to e-commerce, to communicate and collect data over these APIs and make that data externally available. However, take a step back and think about what these companies are doing. Yes, they are pinnacles of modern technology advancement and software application architecture, but one is a streaming movie service and the other is a shopping cart (the mandate came out in 2002). If my bank has externally facing APIs that only use basic auth, I’m finding a new bank. That’s a constraint.

 

What about your business? Most enterprises have roots so deep it is difficult, if not impossible, to lift and shift.

Read more: My Constraints Aren’t Your Constraints: A Lesson to Learn with Containers

New Site Sponsor and Free Tool from Vembu

This site is known for free tools. That's really how it became popular. I'm happy to announce that Vembu, a player in the virtualization backup and recovery space, is now a sponsor. Please take a minute to view their website and read the press release below. Give them a try since there is a free trial and even a free version!

 

CHENNAI - Feb 17, 2017: Vembu, a rapidly evolving Backup & Disaster Recovery company, has, with its latest release, Vembu BDR Suite v3.7.0, come up with a comprehensive free edition for data centers that deploy both virtual and physical environments. The free edition will be beneficial to all those who wish to try out Vembu BDR Suite in their production and testing environments without any costs.

 

Try the Free Trial here - https://www.vembu.com/vembu-bdr-suite-download/

 

Unlike other free edition software available in the market, the Vembu BDR Suite Free Edition covers all the major features needed for backup and recovery across the multiple requirements of a data center. Vembu BDR Suite covers the following environments:

 

Free VMware Backup: Vembu VMBackup Free Edition for VMware offers backup of unlimited VMs running on ESXi and vCenter servers at no cost, especially for businesses that do not have a sophisticated data protection method for their VMs. It supports multiple VMware transport modes such as Direct SAN, HotAdd, and network-based (NBD & NBDSSL), and automatically analyzes and chooses the appropriate transport mode. Vembu VMBackup provides fast and flexible recovery options, including recovery of individual files and folders from the backed-up data.

 

Free Hyper-V Backup: Vembu VMBackup Free Edition for Hyper-V is built to overcome the complexities of creating backup policies for VMs running on a Hyper-V server. Vembu has developed its own proprietary driver to back up Hyper-V VMs efficiently, with up to 5X improvement in performance over other backup software. VMBackup supports VMs located on Hyper-V Cluster Shared Volumes and Windows SMB shares. Hyper-V backups take application-consistent snapshots of highly transactional applications like Exchange Server and SQL Server using the Microsoft VSS writer and truncate the transaction log files during the backup job. Vembu VMBackup Free Edition for Hyper-V is designed to protect Microsoft servers at both the host level and the VM level.

 

Free Windows Server Backup: Vembu previously provided a free edition of its software only for workstations such as desktops and laptops, but it has now been extended to Windows Servers as well. Vembu ImageBackup Free Edition is a backup and disaster recovery solution for physical Windows environments. It backs up the entire disk image of Windows Servers, desktops, and laptops, including the operating system, applications, and files. Vembu ImageBackup also helps migrate Windows machines from physical environments to virtual environments like VMware or Hyper-V (P2V).

 

Read more: New Site Sponsor and Free Tool from Vembu

Building a Private Cloud with Containers: The Learning Curve

Within the IT community and outside of it there is growing interest in private and hybrid cloud architectures. Many organizations are considering, or already building, a virtualized infrastructure to achieve something like a public cloud on Azure or AWS, only on-premises using in-house resources.

 

In the days when VMware was the go-to virtualization technology, vRealize/vCloud was the obvious choice for orchestrating a private cloud, and later, it was OpenStack. Today, with the growing hype around container technologies, you would very likely consider using Kubernetes and Docker to set up a private cloud.

 

The discussion about whether to use virtual machines or containers has been going on for quite some time. Many questions have been raised about container setup and management, security, and which applications are a better fit for containerization. More specifically, Robert Eriksson asks how complex Docker really is, while Kiran Oliver wonders whether Kubernetes is where it gets tricky – check out my recent post in which I show that Kubernetes isn't as difficult as it used to be.

Read more: Building a Private Cloud with Containers: The Learning Curve

Is 2017 The Year for Kubernetes?

The container space is full of leapfrogging technology, and it seems impossible to keep up with the pace. Only two years ago, Kubernetes was starting to get attention. Compared to the other solutions on the market, it was trailing in a distant third place. It wasn’t stable and had a steep learning curve, especially since containers themselves were already part of the learning curve.

 

However, this week in Seattle marks the final KubeCon as it transitions to Cloud Native Con in 2017. The conference is oversold and packed tighter than a can of sardines. Seven months ago, if you had asked me how Kubernetes stacked up, I would have said that it didn’t have a fighting chance. About four months ago, customers started asking the {code} team for integrations into Kubernetes so we could stay a part of the larger conversation. With a bit of hacking, Clint Kitson was able to develop a POC with REX-Ray and Kubernetes over a weekend. It all started becoming very real about two months ago when we realized that 75% of our customer interactions were focusing on Kubernetes over competing technologies.

 

What changed? Honestly, I don’t know. Perhaps the deployment, configuration, and architecture had stabilized. Did the technology leapfrog what others had to offer? Is the idea of Google being the core contributor the biggest selling point? Is everyone in love with Kelsey Hightower? Or maybe it was a combination of all that with community involvement. 

Read more: Is 2017 The Year for Kubernetes?

How to Use Volume Drivers and Storage with New Docker Service Command

Docker 1.12 brought a few exciting features, notably swarm mode. However, this new swarm mode also brought a new docker command for your containers. Gone are the days of using docker run or docker ps to manage your containers; the new command is docker service. This makes sense as our applications are turning into individual services that need some level of availability, which Swarm now manages. But with it come some subtle changes in regards to using volumes, volume drivers, and storage (SAN, NAS, DAS).

 

Using the typical docker run command, we would utilize volume drivers through the --volume-driver flag. 

docker run -d --volume-driver=rexray -v mypgdata:/var/lib/postgresql/data postgres

 

This is pretty easy to read, and you know what it's doing: specify the volume driver, then the host mount mapped to the container mount. You can also specify multiple volumes and only have to use the volume-driver flag once:

docker run -d --volume-driver=rexray -v mypgdata:/var/lib/postgresql/data -v pgetc:/etc postgres

 

The new docker service command brings a few new intricacies, so how does this look?

docker service create --replicas 1 --name pg --mount type=volume,source=mypgdata,target=/var/lib/postgresql/data,volume-driver=rexray postgres

Read more: How to Use Volume Drivers and Storage with New Docker Service Command

VMTurbo rebranding to Turbonomic

A smart move in a world where "VM" or "vSomething" branding is on the way out. I'm sure VMTurbo won't be the only company (or person) to rebrand by the end of 2017. The focus has shifted away from VM monitoring, which has become harder and harder to differentiate. Most monitoring products are looking for their niche, and VMTurbo isn't any different.

 

An excerpt from the full Press Release says:

the company announced it was rebranding to become Turbonomic, the autonomic cloud platform, to reflect customers’ embrace of real-time autonomic systems that enable their application workloads to self-manage across private and public cloud environments, continuously maintaining a healthy state of performance, efficiency and agility with no manual intervention required.

 

Read more: VMTurbo rebranding to Turbonomic

