fstab : auto-mount / hide various drives on your system (with example)

The problem

Whenever I boot my distro, be it Ubuntu or Fedora, I have to go through the
additional step of mounting my drives manually.

Besides the obvious inconvenience of mounting drives manually whenever you want to access data, this has other downsides –

  1. Some applications like DC and Clementine save the path to the download folder / music folder and throw errors when these drives are unmounted. This can be avoided if some drives are auto-mounted on boot.
  2. The Windows C: drive is particularly problematic. Tinkering with it from Linux may cause errors in Windows. This can be avoided if C: is mounted read-only.
  3. There are other partitions made by Windows which are not visible in Windows but are plainly visible in Linux, such as EFI System, DIAGS and PBR Image. These are better left alone, as tinkering with them may break the Windows boot. This can be avoided if these drives are not visible in the list of drives, i.e., auto-hidden. There is no need to mount these drives anyway.

My system

I have a 1 TB hard disk with a Windows 8 dual boot. Besides the partitions created by Windows and the Windows C:, I also have two additional drives (D: and E: in Windows) where I keep all my important data. This data should be accessible from both my Linux distro and Windows.

I. Gather system information

To list the various drives present on your system, use the blkid command.

$ sudo blkid
/dev/sda1: SEC_TYPE="msdos" LABEL="DellUtility" UUID="5450-4444" TYPE="vfat"
/dev/sda2: LABEL="RECOVERY" UUID="6640CCF340CCCB4F" TYPE="ntfs"
/dev/sda3: LABEL="OS" UUID="94C855F4C855D4D8" TYPE="ntfs"
/dev/sda5: LABEL="Data" UUID="CA1224B11224A503" TYPE="ntfs"
/dev/sda6: LABEL="Extra" UUID="92562FB9562F9CCB" TYPE="ntfs"
/dev/sda7: UUID="33df1e45-0b4d-4d11-8971-fb1957776554" TYPE="ext4"

I have a total of six partitions on my hard disk. Let us identify the various drives in this output.
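As a convenience, the device, UUID and filesystem type can be pulled out of blkid-style output with a little sed, ready for pasting into fstab later. This is only a sketch; the sample lines below are copied from the output above.

```shell
# Two sample lines from the blkid output above.
blkid_out='/dev/sda3: LABEL="OS" UUID="94C855F4C855D4D8" TYPE="ntfs"
/dev/sda5: LABEL="Data" UUID="CA1224B11224A503" TYPE="ntfs"'

# Print: device  UUID  filesystem-type
printf '%s\n' "$blkid_out" | sed -n 's/^\([^:]*\):.*UUID="\([^"]*\)".*TYPE="\([^"]*\)".*/\1 \2 \3/p'
```

On a real system you would pipe `sudo blkid` into the same sed expression.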

We have,

  1. The Windows recovery partition: “RECOVERY”, of type ntfs, with UUID 6640CCF340CCCB4F. It should be hidden on boot.
  2. The “DellUtility” partition of type vfat with UUID 5450-4444. This is most probably a boot-related partition and is already auto-hidden, so no changes are required.
  3. An ntfs drive labelled “OS” with UUID 94C855F4C855D4D8. This is the Windows C:. It should be mounted read-only.
  4. D: in Windows, labelled “Data”, of type ntfs with UUID CA1224B11224A503. It should be auto-mounted.
  5. E: in Windows, labelled “Extra”, of type ntfs with UUID 92562FB9562F9CCB. It should be auto-mounted.
  6. The Linux ext4 partition holding the root (“/”) of the Linux file system.

To help recognise the various drives, one can use the following commands.

1. sudo fdisk -l -u
2. lsblk

Output of lsblk on my system.
$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 698.7G  0 disk
├─sda1   8:1    0  39.2M  0 part
├─sda2   8:2    0  13.3G  0 part
├─sda3   8:3    0 106.3G  0 part
├─sda4   8:4    0     1K  0 part
├─sda5   8:5    0 244.1G  0 part
├─sda6   8:6    0 234.9G  0 part
└─sda7   8:7    0   100G  0 part /
sr0     11:0    1  1024M  0 rom



II. Make mount-points

To auto-mount D: and E: and mount C: read-only, we need locations (mount points) to mount them at. On manual mounting, drives generally appear under /media or /run/media.
For the auto-mounted and read-only drives we will create folders under /media, and for the hidden drives we will create folders under /mnt. This is only a convention; any path can be given.

1. For auto mount of D: named “Data”, we create a folder named /media/data 
   $ sudo mkdir /media/data

2. For auto mount of E: named “Extra”, we create a folder named /media/extra
   $ sudo mkdir /media/extra

3. For read only mount of C:, create a folder named /media/os (Name can be anything)
   $ sudo mkdir /media/os

4. For the other hidden drives, create corresponding folders in /mnt folder.
   $ sudo mkdir /mnt/recovery

III. Backup and edit the fstab

Now for the final part.

Open /etc/fstab file and edit it using the above details from blkid and locations of folders. The steps are as follows.

1. Back up the fstab file (in case anything goes wrong).
   $ sudo cp /etc/fstab /etc/fstab.bkp

2. Open fstab using any editor. I am using nano as an example.
   $ sudo nano /etc/fstab

This is what I get :

---------------------------------------------------------------------------------------------
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda7 during installation
UUID=33df1e45-0b4d-4d11-8971-fb1957776554 /               ext4    errors=remount-ro 0       1
---------------------------------------------------------------------------------------------

Comments start with a hash (#).
The format, as mentioned at the top of the default fstab, is:

<file system>  <mount point>  <type>  <options>  <dump>  <pass>
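For reference, here is how the six fields line up for the root entry shown in the file (my own annotation, not part of the file):

```
# <file system>                             <mount point>  <type>  <options>          <dump>  <pass>
UUID=33df1e45-0b4d-4d11-8971-fb1957776554   /              ext4    errors=remount-ro  0       1
```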

1. To auto-mount D: and E: on boot, add these lines to the fstab.

UUID=CA1224B11224A503  /media/data  ntfs-3g  defaults,uid=1000,gid=1000,windows_names,locale=en_US.utf8  0 0
UUID=92562FB9562F9CCB  /media/extra  ntfs-3g  defaults,uid=1000,gid=1000,windows_names,locale=en_US.utf8  0 0


The uid=1000,gid=1000 options are important. They make the mounted file system owned by your user, which among other things enables trash collection on the auto-mounted drives. Without them, whenever you try to trash a file you get the ugly message “Cannot move file to trash, do you want to delete immediately?”.
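Note that 1000 is only the default uid/gid of the first user created on most distros; if you are unsure of yours, check with the id command and substitute your own values in the fstab options.

```shell
# Print the numeric user ID and group ID of the current user.
# On most single-user Ubuntu/Fedora installs both are 1000.
id -u
id -g
```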

2. For a read-only mount of C:, add the following line to the fstab.

UUID=94C855F4C855D4D8  /media/os  ntfs  defaults,umask=222  0 0

Here, umask=222 is what makes the mount read-only. It masks out the write bits from the permissions, so no one has write permission on the C: drive mounted at /media/os.

3. For hiding drives like the recovery and PBR Image partitions, add lines like the following to the fstab.

UUID=6640CCF340CCCB4F /mnt/recovery ntfs  noauto,umask=222  0 0

The noauto option, together with the read-only permissions, is enough to keep these drives hidden.

Now, if you still want to mount any of the above drives, execute the following command with the appropriate path. For example:

$ sudo mount /mnt/recovery

The final file, along with comments, is shown here.

---------------------------------------------------------------------------------------------
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda7 during installation
UUID=33df1e45-0b4d-4d11-8971-fb1957776554 /               ext4    errors=remount-ro 0       1
# Read-only mount of C:
UUID=94C855F4C855D4D8  /media/os  ntfs  defaults,umask=222  0 0
# Disable mounting of Recovery
UUID=6640CCF340CCCB4F /mnt/recovery ntfs  noauto,umask=222  0 0
# Mounting D: and E:
UUID=CA1224B11224A503 /media/data ntfs-3g defaults,uid=1000,gid=1000,windows_names,locale=en_US.utf8  0 0
UUID=92562FB9562F9CCB /media/extra ntfs-3g defaults,uid=1000,gid=1000,windows_names,locale=en_US.utf8  0 0
---------------------------------------------------------------------------------------------

4. Finally save the file and exit.

5. Run the following command to apply the changes made in fstab.

$ sudo mount -a


Ta-da! All the unnecessary drives are hidden, and D: and E: are auto-mounted. You can cross-check by restarting.
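Instead of restarting, you can also check /proc/mounts directly after `sudo mount -a`; a small sketch (the paths are the ones used above):

```shell
# Succeeds if the given mount point appears in /proc/mounts.
is_mounted() {
    grep -q " $1 " /proc/mounts
}

# The root filesystem is always mounted, so this prints "/ is mounted".
if is_mounted /; then echo "/ is mounted"; fi
# After 'sudo mount -a' you can likewise check /media/data, /media/extra, /media/os.
```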
Enjoy.

If you prefer GUI tools, ‘pysdm’ is a handy one that can do all of the above.

Install using the following command.

$ sudo apt-get install pysdm

References

  1. https://help.ubuntu.com/community/AutomaticallyMountPartitions
  2. http://community.linuxmint.com/tutorial/view/1513

Android from command line – Useful things

I was fed up with having to open up Eclipse every time I had to install my Android package and start the activity. So I read a little of the developer documentation and found some useful commands. Make sure android-sdk/tools and android-sdk/platform-tools are in your PATH to try these out.


 

1. Updating the project

If you have upgraded the Android SDK, changed systems, or changed the targeted Android platform, you need to run this command.

android update project -n ProjectName -p path/to/project -t android-XX -l path/to/included/libraries [--subprojects]

This command updates properties of the project and generates a file called build.xml in the project directory using the new values provided by you. build.xml is later used to build and package the complete android project and generate its apk.

Help can be shown by using the -h or --help options. The options are given below for reference.

-p --path : The project’s directory. [required]
-n --name : Project name.
-l --library : Directory of an Android library to add, relative to this project’s directory.
-t --target : Target ID to set for the project.
-s --subprojects : Also updates any projects in sub-folders, such as test projects.


 

2. Building and installation

After updating the build.xml file, the next step is to build the project and generate the apk. This can be done using ant, in one of two ways: create a debug version, to quickly install to the emulator for debugging, or create a release version for releasing on the Play Store. For the debug version, ant signs the application with a debug key; this apk is not eligible for release. For the release version, no signing happens: you have to sign the application manually (with keytool and jarsigner) in order to release it.

ant debug
ant release

Now, many times you may encounter an error similar to this: “Invalid resource directory name … resource ‘crunch’ ”.
This can be fixed by using the clean target.

ant clean

To install the generated apk to the device or emulator, use the following.

ant debug install
ant release install

You can also combine multiple targets; they run in order.

ant clean debug install

Now, I was facing this weird error during ant install: that I had multiple devices. This is easy to handle with the plain adb command: use the option ‘-e’ to install to the emulator, ‘-d’ to the device, and ‘-s serial_no’ to install to a specific device/emulator. After searching the internet, I found that this can be done in ant too by defining a flag

-Dadb.device.arg="-e" for emulator
-Dadb.device.arg="-d" for device
-Dadb.device.arg="-s serial_no" for specific device/emulator
 
ant -Dadb.device.arg="-e" install

 

3. Logging

Now that the apk is installed on the required device or emulator, you want to check the logs. No, not Eclipse once again. There is the option of using DDMS, but I prefer the terminal, so that is what I am going to use.

adb logcat

Now, this command will dump the whole of your device/emulator log onto your terminal. Who wants that? It is not humanly possible to go through that amount of data at once. One option is to write the logcat output to a file and read it at your leisure. For that, use

adb logcat -f filename.log

The smarter option in this case is to filter the logs. Tags and log priorities can be used for filtering. The tag can be the TAG used in your activities or the package name of the application. The log priority is one of the following, in increasing order of priority:

Verbose (V) < Debug (D) < Info (I) < Warn (W) < Error (E) < Fatal (F) < Silent (S) (nothing is printed)
adb logcat TagName1:D TagName2:W TagName3:E

As can be seen, multiple tag names and log priorities may be combined. Use ‘*’ as a wildcard; for example, *:S silences all the logs.

Now, you may want to view additional information with the log message, like the process ID, thread ID, time, tag etc. logcat offers several output formats. Some of these are

brief – The log priority/tag and the PID of the issuing process are shown with the message. This is the default.
raw – Only the raw message is shown, with no metadata.
time – The date, invocation time, priority/tag and PID are shown with the message.
long – All the metadata is shown. Two logs are separated by a blank line.

There are others; check the developer documentation for the full list. The output format is selected with the -v option (-f, as seen above, writes to a file):

adb logcat -v time

But this is not over yet. I play around with prebuilt native libraries a lot, and adding Android log calls to them is a pain. Android by default sends everything written to stderr and stdout to /dev/null. If there were some way to see those logs, it would be awesome.

There is a way. Use setprop option of android shell.

adb shell stop
adb shell setprop log.redirect-stdio true
adb shell start

Now, all the logs sent to stderr and stdout are shown with Info priority.


 

References:

1. http://developer.android.com/tools/building/building-cmdline.html
2. http://developer.android.com/tools/projects/projects-cmdline.html
3. http://www.alittlemadness.com/2010/06/15/android-ant-builds-targeting-a-specific-device/
4. http://developer.android.com/tools/debugging/debugging-log.html

 

GSOC ’14 : A new beginning

I have been selected for Google Summer of Code ’14. I will be working with Subsurface actively for the next two months. It is the outcome of the two months of effort I had put into this. I am ecstatic.

It all began last year when I came to know about students from our college getting selected in GSOC. I didn’t know about it back then, but on researching a bit, I decided that I had to give it a shot the next year. It stayed in the back of my mind. So, when it was finally announced, I jumped up and started working towards it. It involved weeks spent in the library reading and understanding code, picking out tickets and issues, submitting patches for them and so on. Some days I felt low as things didn’t seem to be going right, but I had friends working with me who raised my spirits and mentors who helped me at every step. Other days, especially the ones when my patches were merged, I was euphoric. I remember the childlike happiness of my first merge. I owe my selection to the mentors for providing technical support and pointing me in the right direction, to my friends for being with me in times of joy and sorrow, and to my parents for their wishes and support. Without them, it would have been difficult.

I didn’t really appreciate open source until I became a part of it myself. The kind of selfless work these people do is mind-boggling. Now that I am a part of it, I don’t think I’ll be leaving. I hope to give back to the community as much as I can.

The benefits of open source are endless. I have exposure to some of the best-written code. I am in contact with some of the best coders in the world. I even had a reply to one of my patches from Linus Torvalds. Most importantly, it gives you a sense of happiness for having done some good for the world, and a sense of belonging to a community, to something bigger than yourself. The list goes on …

I hope to learn as much as I can during the following two months. GSOC has opened a lot of doors for me. It is changing my life every single day. It is a new beginning for me, a new phase of my life. I hope to discover more of myself during this phase.

I hope you all spread the goodness of Open Source and Google Summer of Code far and wide so that more people become a part of it. Cheers.

Setting up Quassel IRC on Amazon EC2

Internet Relay Chat is one of the best tools for discussions.
It gives you the option to be as open or as encrypted as you want. It allows you to stay fully anonymous while participating in chatrooms on almost any topic. It allows hidden chatrooms, chatbots, and scores of other things.

It is especially suited to the needs of the open source community and has indeed been harnessed to its full extent. I came in touch with the open source community recently, during preparations for Google Summer of Code ’14. IRC proved to be an all-powerful tool to stay in touch with the organisation you want to participate with, your mentors, and the other participants. You can dive straight in and introduce yourself to others, or you can hold back, idle, eavesdrop on conversations and gauge others.
Nevertheless, it’s an invaluable tool.

But I noticed a few things that could have enhanced my experience with IRC. Let me give some background. The college in which I am presently enrolled has extremely crappy and inhibitive internet access. I think the people responsible for our college’s internet service are unqualified, unprofessional and living in the stone age. I have a lot to say, but I’ll vent my anger towards the administration in another post (maybe). Suffice it to say that I could not remain online for more than half an hour at a time, and when I did, the proxy refused to connect to the freenode server. This meant frequent disconnections, leaving in the middle of important conversations, and no way to idle – to stay on a channel just to hear what discussions are going on. I had to use webchat.freenode.net to connect to the community. As the chatroom didn’t have a logbot, I had to copy-paste conversations into local text files to keep track of the important ones. Desktop IRC clients offer logging, but the proxy refused their connections too, and I would get disconnected after an hour or so and not reconnect.

If only some tool existed that stayed connected 24×7 to the chatrooms I needed to idle on, even in my absence, and gave me the whole exchange of conversation when I needed it.
Quassel provides exactly this. I think it’s revolutionary. It divides the whole setup into two parts: the core and the client. The core should be kept somewhere it can stay online and always connected to the required IRC server. Valid options would be a desktop that is always on and connected to the network, like a work computer, a real server, or a virtual server.

For me, a virtual server was the best option. Virtual servers are offered today by Amazon and Google, among others, in the form of Amazon EC2 and Google Compute Engine respectively. I chose Amazon EC2 by chance and it has given me no cause for complaint yet. Let us begin the setup.

Step 1. Make an Amazon Web Services account.

Go to http://aws.amazon.com/ec2/ and register yourself. Unfortunately, you have to have a credit card for this step, but you get a free tier with limited usage of Amazon EC2 for one year. If Quassel is the only thing you are going to use Amazon EC2 for, the free tier provides ample resources. On completing the registration, visit the AWS console – http://console.aws.amazon.com/
You are now all set to launch the Quassel core and use IRC as never before.

Step 2. Launch the Virtual Server

On the EC2 Management Console, choose the option to launch a new instance, present in the Instances tab. You will be taken through a wizard to set up the new instance.

Choose an Amazon Machine Image
I chose Ubuntu Server 14.04 as I had previous experience with the Ubuntu environment. I also found that Ubuntu Server has quassel-core in its software repository, unlike the other machine images. Just cross-check that the AMI you have chosen is Free Tier Eligible.

Choose the instance type
Choose t1.micro instance as it is the only one eligible for free tier. t1.micro is sufficient for quassel usage.

Choose other configuration options
It is generally safe to leave the default choices.

Storage options
8 GB storage provided is sufficient.

Tag Instance
Here you can add tags to this particular instance like name and other metadata. You don’t really need this for Quassel setup.

Configure Security Group
Now this is a crucial step. You have to create a new Security Group. It decides the permitted incoming connections for your instance.
SSH on port 22 would be enabled by default. This is required for communication with the remote instance by the admins.
Besides this, you need to make two new rules.
1. A custom TCP rule on port 4242 for the inbound connection to the Quassel service.
2. An HTTP rule on port 80.
In the Source column, add the permitted IP address ranges, i.e. the ranges you expect to connect from. This is for security purposes. If you don’t have a static IP or don’t want to restrict the range, leave it at 0.0.0.0/0, which allows connections from any IP.

Review
Review the settings for your instance and click on launch.

SSH private key
Make a new SSH private key for your instance if you are doing this for the first time. Give the key a name, download it, and keep it somewhere you will remember; I keep mine in a ~/.cert directory. This key will be required to ssh into the server.
If you already have a private key, you can select that.
Launch
Finally, launch the instance by clicking on the Launch button.

You now have a working instance on Amazon Web Services. Note its public IP in the Instances tab of the console.

Step 3. Set up quassel-core on your server.

To do this, follow these steps.

SSH into your remote server.

Let the public IP of your Amazon Web Server be 123.45.67.89 and the location of the key file be ~/.cert/myWebServer.key

Then, to ssh into the web server, open up a terminal and enter the following command.

    $ ssh -i ~/.cert/myWebServer.key ubuntu@123.45.67.89

If you are doing it for the first time, you would get something like this

The authenticity of host ‘123.45.67.89 (123.45.67.89)’ can’t be established.
ECDSA key fingerprint is ab:cd:ef:gh:ij:kl:mn:op:qr:st:uv:wx:yz:12:23:34.
Are you sure you want to continue connecting (yes/no)?

Key in yes and press enter. You’ll see the message

Warning: Permanently added ‘123.45.67.89’ (ECDSA) to the list of known hosts.


And after that you’ll be logged into your remote server. You’ll see something like this.


Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-24-generic i686)

 * Documentation:  https://help.ubuntu.com/

  System information as of Sat May  3 00:00:00 UTC 1989
  System load:  0.0               Processes:           72
  Usage of /:   12.0% of 7.75GB   Users logged in:     0
  Memory usage: 8%                IP address for eth0: 987.65.43.21
  Swap usage:   0%

  Graph this data and manage this system at:

  Get cloud support with Ubuntu Advantage Cloud Guest:

0 packages can be updated.
0 updates are security updates.


Last login: Sat May  3 07:09:35 2014 from 45.67.78.90

ubuntu@ip-123-45-67-89 $

Now, first update and upgrade your server.

    $ sudo apt-get update && sudo apt-get upgrade

Install PostgreSQL on the server

Quasselcore needs an SQL backend. Of the available options, such as SQLite and PostgreSQL, PostgreSQL is favoured as it handles large amounts of data better.

So, for using PostgreSQL, first install it using the command

    $ sudo apt-get install libqt4-sql-psql

For further references, read this and this.

Install quassel-core

Now, install quassel-core on your system

    $ sudo apt-get install quassel-core

Quassel-core will be installed and automatically start running on your server.
If you notice that the SQLite backend was chosen for your installation, switch to PostgreSQL by entering the command.

    $ quasselcore --select-backend=PostgreSQL

Step 4. Configure your core by connecting it with a client.

To configure your quassel-core server, you need to connect it with a client.
On your desktop, install quassel-client.

For ubuntu users
    $ sudo apt-get install quassel-client

For fedora users
    $ su -c "yum install quassel-client"

Run the client.
In the quassel-client gui, go to file->connect to core
Add a new core.
  • Account Name : Write any suitable name for your core
  • Hostname: Enter the public IP of your Amazon EC2 server
  • Port : Default is 4242. Leave it the way it is.
  • User: On the first run, you can enter any username; the core will ask for a username and password again during configuration. For consistency, enter the username you want to set on your quassel-core.
  • Password:  Enter the password you want to set on your quassel-core.
  • Proxy Settings: Enter the required proxy settings.
Press OK twice. The client will now attempt to connect to your remote Amazon EC2 server.

On the first run, you’ll be asked to accept the security certificate of your server. Accept it, then quickly configure the settings of your core.

Enter the username and password that you want to set up your core with, along with other required settings, like the IRC server you want to connect to, e.g. freenode.

Step 5. Start IRCing… Stay online 24×7

Voilà! You are all set. You will be logged on to an IRC server like freenode. Join any chatroom you wish, for example #quassel, and you will stay online as long as the server is running. Whenever you want to see your messages or chat on the channel, fire up your quassel-client. Enjoy IRCing…

If you are facing any kind of problems, don’t hesitate to join #quassel on freenode and clarify your doubts. The people there are very helpful. I was stuck at many places and they quickly sorted everything out.

Incremental History Searching in Ubuntu

To search your shell history incrementally in Ubuntu, use the following steps:
1. Create and open .inputrc by using command
gedit ~/.inputrc
2. In the file that opens, add the following lines
"\e[A": history-search-backward
"\e[B": history-search-forward
"\e[C": forward-char
"\e[D": backward-char
Save and exit, then open a new terminal for the change to take effect.
And you’re done.
Suppose I have executed this command in the past:
sudo apt-get update && sudo apt-get upgrade
To repeat it, all I need to do is type sudo and press the Up arrow, and the whole command appears again.

Proxy settings for Ubuntu


It is a real pain when you have an authenticated proxy and your applications don’t work with it.
So, I have made a reference list of all the places where one should set the proxy username and password so that almost everything works. I don’t really know the reason for each, as I am a rookie. I’ll keep the list updated and add the reasons as I learn them.

So, follow these steps:


1. Setting System Proxy

Go to System Settings >> Network >> Network Proxy
Change the Method to “Manual” and add the HTTP, HTTPS, FTP and SOCKS proxies as required.
Click on “Apply System Wide”, enter your password and click OK.


2. apt.conf

apt.conf controls the apt functions. Once the proxy is added, commands like “sudo apt-get update” and “sudo apt-get upgrade” go through the proxy. But remember, “add-apt-repository” does not work through the proxy as far as I know, so setting the proxy in apt.conf will not help with it.

Open terminal and type this command

sudo gedit /etc/apt/apt.conf

Input your password.
In the file that opens up, insert the following lines

Acquire::http::proxy "http://user_name:password@proxy_host:proxy_port/";
Acquire::https::proxy "http://user_name:password@proxy_host:proxy_port/";

Edit the above with your proxy host, port and credentials.


3. environment

Open your environment file by typing this command in terminal.

sudo gedit /etc/environment

Add these lines to the end of the file, then save and exit. Insert your proxy username, password, host and port at the appropriate places.

http_proxy="http://user_name:password@proxy_host:proxy_port/"
https_proxy="http://user_name:password@proxy_host:proxy_port/"


4.  ~/.bashrc

Open the .bashrc file in your home folder by executing the following command.

 gedit ~/.bashrc


Add the following line at the end, filling in your details as required.

export http_proxy="http://user_name:password@proxy_host:proxy_port/"

Note the straight double quotes in the above command.
The easiest way is to execute the following commands:

echo "export http_proxy=\"http://user_name:password@proxy_host:proxy_port/\"" >> ~/.bashrc

echo "export https_proxy=\"http://user_name:password@proxy_host:proxy_port/\"" >> ~/.bashrc

Follow up with this command in the terminal. It will run the .bashrc script once, setting the http_proxy and https_proxy variables.

source ~/.bashrc


You can check whether the proxy is set by typing the following commands in the terminal one by one. If the output shows your proxy settings, you are set.


echo $http_proxy

echo $https_proxy

Hopefully, after all these steps, applications that do not have individual proxy settings will also be routed through the authenticated proxy. Enjoy.
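For quick experiments, the same environment variables can be toggled for the current shell session with a small pair of helper functions. This is only a sketch; user_name, password, proxy_host and proxy_port are placeholders, as above.

```shell
# Set the proxy variables for the current shell session only.
proxy_on() {
    export http_proxy="http://user_name:password@proxy_host:proxy_port/"
    export https_proxy="$http_proxy"
}

# Remove them again.
proxy_off() {
    unset http_proxy https_proxy
}

proxy_on
echo "$http_proxy"
proxy_off
echo "${http_proxy:-unset}"
```

Unlike the files above, these take effect only in the shell where they are run.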



Note:
To remove proxy from everywhere, do this
1. Go to System Settings >> Network >> Network Proxy. Set the Method to ‘None’. Click on ‘Apply System Wide’. Enter your password when asked.
In this step, the proxy settings in /etc/environment and /etc/apt/apt.conf will be removed. Only the proxy in ~/.bashrc remains.
2. Open ~/.bashrc by

gedit ~/.bashrc

Remove the lines beginning with “export http_proxy=” and “export https_proxy=”.

Restart your system and you are now proxy free.

Android Application Development in Ubuntu

Step 1. Prepare Ubuntu for Android Application Development

Installing Java in Ubuntu is the first and most crucial step, and it can be quite tricky.
The simplest method is through the webupd8 repository. Open up the terminal in Ubuntu by pressing CTRL + ALT + T and type in the following commands in order.

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java7-installer

Accept the terms and conditions along the way. The installer sets up everything required. You can verify this by running the following command in the terminal.

java -version

It should give output in the format :

java version "1.7.0_25"

Java(TM) SE Runtime Environment (build 1.7.0_25-b15) 

Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode) 

This means Java is set up correctly on your Ubuntu. Go here for further information.
Now that Java is set up, move on to the next step.

Step 2. Download All

To work on Android application development, the simplest way is to use an IDE (Integrated Development Environment). Google recommends eclipse. Download the latest version of eclipse available on this page.
Extract the downloaded zip file to a location of your choice, which we’ll call the eclipse directory. Open eclipse by double-clicking the eclipse icon in the eclipse directory. Choose a suitable workspace and let it begin. After eclipse opens up, set up the proxy settings, if needed, by going to Window >> Preferences >> General >> Network Connections.
Then update eclipse by going to Help >> Check for Updates

After updating, install the Android Development Tools by going to Help >> Install New Software.
Fill in “ADT Plugin” as the name and “https://dl-ssl.google.com/android/eclipse” as the location. Select all, accept the licence agreements and finish. Let the tools install, and restart eclipse when asked. If this does not work, download the zip of ADT from here and, instead of adding the above link, give the path to the ADT zip.


This process can be shortened by downloading the ADT Bundle from this page. Just extract it and you’re good to go. You then have to add the SDK and platform tools for the Android versions you want to target. For this, click on the SDK Manager icon at the top of eclipse and select the SDKs and tools required for your purpose. In my opinion, Tools and Extras should all be installed, and the latest SDK should be added.

After installation of ADT and SDK and platform tools, this step is finished.

Step 3 : Check out eclipse and make your first application.

Launch eclipse. Go to File >> New >> New Project >> Android Application Project
Give it a suitable name and icon. Choose the default view and navigation. A default hello world application is created.
Start android application development.
A valuable resource will be d.android.com.

Step 4 : Running adb. 

Adb, or Android Debug Bridge, is located at android-sdk-linux/platform-tools/adb,
but it rarely works correctly if you are setting everything up for the first time.

The first issue that comes up is that adb is not recognized by the system.
On executing ./adb, an error like
Error: No such file or directory
pops up.

The reason is that adb, included with the other tools in the ADT Bundle or SDK Tools, is a 32-bit binary, while the system you are working on is most probably 64-bit. To remedy this, run the appropriate command in the terminal.

For Ubuntu 12.04:
sudo apt-get install ia32-libs

For Ubuntu 13.10 and beyond:
sudo apt-get install lib32z1 lib32ncurses5 libstdc++6:i386

This should get adb running.

The available devices can be seen by running :

adb devices

The next challenge is that adb doesn’t recognise the USB device plugged in. Running the “adb devices” command shows this:

????????????? Permission Denied

To remedy this, run the following command in the terminal.

sudo gedit /etc/udev/rules.d/51-android.rules

On opening text editor, paste the following lines in it. Save and exit.

SUBSYSTEM=="usb", ATTR{idVendor}=="04e8", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="04dd", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="054c", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="0fce", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="2340", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="0930", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="0502", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="0b05", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="413c", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="0489", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="04c5", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="091e", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="0bb4", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="05c6", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="04da", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="0471", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="1d4d", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="10a9", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="19d2", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="201E", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="109b", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="12d1", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="24e3", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="2116", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="0482", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="17ef", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="1004", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="22b8", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="0e8d", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="0409", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="2080", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="0955", MODE="0666", GROUP="plugdev"
SUBSYSTEM=="usb", ATTR{idVendor}=="2257", MODE="0666", GROUP="plugdev"

These rules tell the system the USB vendor IDs of all major Android device manufacturers, so that adb can access devices from any of them.

Follow it up with this command, then unplug and replug your device so that the new rules take effect:

sudo chmod a+r /etc/udev/rules.d/51-android.rules

Now, almost all devices are recognized by adb.

Refer to this page for further reference on adb and udev.