
Install and Configure MySQL on Ubuntu 16.04

I have been working on a project for some time, and this tutorial is part of it. I will talk about the project in another post.
Let's get back to the topic.
MySQL is a general-purpose, free RDBMS and one of the most popular databases in the open-source world.
I am using Ubuntu 16.04 for this tutorial.
You can check your OS version using the command below.
lsb_release -a

A stable MySQL package is available in the Ubuntu repositories.
Let's update the OS before we install the MySQL package.

sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade

The commands above update the OS and its installed packages.
Once that is done, you might need to restart the system, depending on which packages were upgraded. If you are prompted to restart, reboot your system.
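If you want to check for a pending reboot from a script rather than waiting for a prompt, Ubuntu's update-notifier drops a marker file when one is needed. A minimal sketch — the `needs_reboot` helper name is my own, and the default path assumes a standard Ubuntu layout:

```shell
# Ubuntu creates /var/run/reboot-required when an upgrade needs a reboot.
# needs_reboot is a hypothetical helper; an alternate path may be passed as $1.
needs_reboot() {
    [ -f "${1:-/var/run/reboot-required}" ]
}

if needs_reboot; then
    echo "Reboot required before continuing."
else
    echo "No reboot needed."
fi
```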

In the next step, we will install the MySQL package using the command below.
sudo apt-get install mysql-server
During the installation, you will be prompted to set a root password. Please remember this password; you will need it in later steps.
Once the installation is done, you can check the version of the MySQL server and verify that it is running.
To check the version: mysql -V
To check the process status: sudo netstat -tap | grep mysql
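If you need just the version number (for scripting, say), you can parse the `mysql -V` output. A sketch using a sample output line from a typical 5.7 install, hard-coded here so the parsing is easy to follow — the exact wording of the line can vary between builds:

```shell
# Sample `mysql -V` output; on a live system you would capture it with: mysql -V
sample='mysql  Ver 14.14 Distrib 5.7.33, for Linux (x86_64) using  EditLine wrapper'

# Pull out the version number that follows "Distrib"
version=$(echo "$sample" | grep -oE 'Distrib [0-9.]+' | awk '{print $2}')
echo "MySQL version: $version"
# → MySQL version: 5.7.33
```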

Here are my outputs for those commands.
If you would like to secure the server, you can use the script below. It runs against an already installed MySQL server and tightens a few settings (removing anonymous users, disallowing remote root login, and so on). As I am installing for development purposes, not for production, I will skip it.

The script for secure installation: sudo mysql_secure_installation

Now let's connect to the MySQL instance. By default, MySQL listens only on localhost.
To connect: mysql -u root -p

I also found MySQLTuner, a Perl script that analyzes a MySQL server and suggests configuration optimizations.
To use that script, you need git. Please install git using the command below if you don't have it already.

Install git: sudo apt-get install git

Now let's clone the git repo and run the tuner script.

cd ~
git clone https://github.com/major/MySQLTuner-perl.git
cd MySQLTuner-perl
perl mysqltuner.pl --user root --pass='root_password'

After applying any configuration changes the tuner recommends, restart MySQL.
sudo systemctl restart mysql.service

Now verify that everything is running fine by checking the MySQL version and logging in to the MySQL server.

To check the version: mysql -V
To check the process status: sudo netstat -tap | grep mysql

You can also change the MySQL config file if you want to connect to this instance from outside localhost.
The config file is at "/etc/mysql/my.cnf".
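To see what bind-address is currently set before editing anything, you can grep the config file. A small sketch — the `show_bind_address` helper name is my own, and the default path assumes the Ubuntu layout above:

```shell
# Hypothetical helper: list bind-address lines (with line numbers) in a
# MySQL config file; defaults to Ubuntu's /etc/mysql/my.cnf.
show_bind_address() {
    grep -n "bind-address" "${1:-/etc/mysql/my.cnf}" 2>/dev/null
}

show_bind_address || true   # prints nothing if the file is absent
```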

The snippet below shows how the relevant part of my config file looks. Note the section that starts with "[mysqld]" and, at the end:
port = 3306
bind-address = 0.0.0.0

Setting bind-address = 0.0.0.0 means the server accepts connections from any system, not just from localhost.

You also need to grant privileges to the user at the host you will connect from.
Use the commands below to set it up.

mysql -u root -p
Enter password: <enter password>
mysql> GRANT ALL ON *.* TO 'root'@'host ip' IDENTIFIED BY 'put-your-password';
mysql> FLUSH PRIVILEGES;
mysql> exit
Then restart the service: sudo systemctl restart mysql.service
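If you script this step, it helps to build the GRANT statement from a variable so the client host is easy to change. A sketch — the IP address and password below are placeholders, not real values:

```shell
# Placeholder client address and password — substitute your own.
CLIENT_IP="192.168.1.50"
SQL="GRANT ALL ON *.* TO 'root'@'${CLIENT_IP}' IDENTIFIED BY 'put-your-password';"
echo "$SQL"

# The statement could then be fed to the server, e.g.:
#   echo "$SQL FLUSH PRIVILEGES;" | mysql -u root -p
```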



If you are having a problem logging in to a local MySQL server, you can try the following steps; they worked for me.
Start server:
sudo service mysql start
Now, go to the socket folder:
cd /var/run
Back up the socket directory:
sudo cp -rp ./mysqld ./mysqld.bak
Stop server:
sudo service mysql stop
Restore the socket directory:
sudo mv ./mysqld.bak ./mysqld
Start mysqld_safe with grant tables disabled:
sudo mysqld_safe --skip-grant-tables --skip-networking &
Open a MySQL shell:
mysql -u root
Change the password. First, select the mysql database:
mysql> use mysql;
Now run the two queries below (replace '123456' with your new password):
mysql> update user set authentication_string=password('123456') where user='root';
mysql> update user set plugin="mysql_native_password" where User='root'; 
Finally, flush privileges and quit:
mysql> flush privileges;
mysql> quit;
For checking, stop the temporary server, restart MySQL normally, and log in with the new password:
sudo pkill mysqld
sudo service mysql start
mysql -u root -p
