Thursday, 25 December 2014

I've moved on!!

Hello everyone!!

I have decided to move on. My new blog is hosted at http://light94.github.io/ . I have shifted to Jekyll for my blog and have hosted it using GitHub Pages. The combo is free, just like Google's Blogger. It offers an additional advantage: you can create your post while you are offline and, when it's done, just push it to GitHub.

Jekyll is a parsing engine that transforms documents written in simple formats like Markdown or Textile into HTML pages. It uses the Liquid templating engine, which will look very familiar if you have worked with templates in Django.

To get a quick start, I used Jekyll Bootstrap. It creates the directory structure desired by Jekyll and gives a default theme (there are other themes to choose from), so you can just begin writing content. It is open source and hackable.

The developers of Jekyll Bootstrap have made a single-page document explaining Jekyll and Liquid here. It gives a clear picture of everything and gives you a head start.

In addition, Jekyll gives you the functionality to import blogs from other platforms. So, I imported all the blog posts I made here and ported them to GitHub Pages.

The page here https://pages.github.com/ will guide you to begin using github pages.

Cheers!!

Wednesday, 17 December 2014

Disabling Proxy on Ubuntu 14.04

It appears that disabling proxy settings is buggy on Ubuntu (till at least Ubuntu 14.04). I say this because when we disable the proxy through the GUI like this:


we would think that we have completely removed the proxy settings from the system. But that does not appear to be so, because in most cases apt-get still uses the proxy settings to fetch repositories, and this is quite troublesome. There is a workaround if you do not want to clear the proxy settings from apt-get, and that is:
1) Download the package straight from the website (you can reach it by simply searching for the package name on Google)
2) Open terminal, cd to that directory and run sudo dpkg -i <package_name>


I have tried the above method and it works well, but I don't prefer it. I like apt-get doing all this work for me (:P).

So, I searched a lot for solutions to my problem and implemented the solutions proposed, and none seemed to work except this:

1) $ sudo gedit /etc/apt/apt.conf  (this will fire up gedit)
2) Clear the file and add the line acquire::http::proxy "false";
3) Save the file and exit.

Now, even apt-get will understand that there is no proxy between you and the internet and will connect directly.

Cheers!!

P.S: If your problem is just the opposite, i.e. apt-get does not use the global network proxy settings you provided, see the solution given here.


Sunday, 14 December 2014

Daemon Process in short

I have compiled this post to give an overview of what a daemon process is.

A daemon is a background process that is designed to work without user intervention. It does not have a controlling terminal (being a background process). Usually daemon names end with a 'd', but that is not a restriction. A daemon process waits for an event to occur or some condition to be met.

To delve in further, it is necessary to know what a process identifier, or process ID, or PID is. On a Unix-like operating system, a PID is a unique, non-negative identification number assigned to each active process. PIDs are generally large numbers like 7732 or 4483, but this does not mean that there are that many processes running on the system. It is so because PIDs are not immediately reused, to prevent possible errors. You can test it immediately by opening a shell and typing in ps. If it is a new shell, it will show only 2 processes, i.e. bash and ps, and generally the PID will be a large number.

Now, there is one special process called init. This is always the first process to be initiated when the system boots, and it remains as long as the system is on. It is assigned PID 1. This means every process will have init in its process hierarchy. The parent process of a daemon is mostly init (PID 1), but this is not always true, especially when you launch a daemon process from a terminal. You can test it when you create a daemon and use the following command to get its PPID, i.e. its parent's PID, as given here.
ps -o ppid= <pid>

It won't necessarily be 1. 

You can know what process corresponds to a given pid by ps -j pid . So you can verify if it holds true for init.
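If you want to see how a process actually detaches itself, here is a minimal sketch of the classic double-fork daemonization in Python (my own illustration, not taken from the sources below):

import os
import sys
import time

def daemonize():
    if os.fork() > 0:      # first fork: the parent exits,
        sys.exit(0)        # the child keeps running
    os.setsid()            # become session leader, drop the controlling terminal
    if os.fork() > 0:      # second fork: the session leader exits, so the
        sys.exit(0)        # daemon can never reacquire a terminal
    os.chdir("/")          # don't keep any directory in use
    os.umask(0)
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):   # detach stdin/stdout/stderr
        os.dup2(devnull, fd)

daemonize()
while True:                # the daemon just waits for its event or condition
    time.sleep(60)

Run it, then check its PPID with the command above: because both of its ancestors have exited, the orphaned process gets adopted by init, so here you will typically see 1.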

Here is an important question that I asked regarding killing a daemon process: http://codereview.stackexchange.com/questions/73613/monitor-downloads-folder-and-sort-files-as-they-are-downloaded

Compiled from
1. http://www.linfo.org/daemon.html
2. http://en.wikipedia.org/wiki/Daemon_(computing)
3. http://askubuntu.com/questions/153976/how-do-i-get-the-parent-process-id-of-a-given-child-process
4. http://www.linfo.org/pid.html


                          

Friday, 12 December 2014

A SCRIPT that makes your Downloads directory neat!!

I made a utility for myself that helps me find downloaded files more easily than before. It is a shell script that analyzes the filename of a downloaded file and places it in a directory named after its extension. This means a pdf file will go to a directory named pdf as soon as it is downloaded. This works for all file extensions. I intend to make it better. It is going to help those whose Downloads directory is always cluttered (like mine :P). You can get it at https://github.com/light94/myShellScripts/blob/master/sortDownloadedFiles

Place the script in your bin directory and just run it like

                     nohup sortDownloadedFiles &

and close the terminal. The script will keep running in the background and will keep watching your Downloads directory.
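The actual script is built on inotifywait, but the idea is simple enough to sketch in Python with plain polling (this is just an illustration, not the script itself):

import os
import shutil
import time

DOWNLOADS = os.path.expanduser("~/Downloads")

while True:
    for name in os.listdir(DOWNLOADS):
        src = os.path.join(DOWNLOADS, name)
        ext = os.path.splitext(name)[1].lstrip(".").lower()
        if not os.path.isfile(src) or not ext:
            continue                      # skip directories and extensionless files
        dest_dir = os.path.join(DOWNLOADS, ext)
        if not os.path.isdir(dest_dir):
            os.makedirs(dest_dir)         # one directory per extension
        shutil.move(src, os.path.join(dest_dir, name))
    time.sleep(5)                         # poll; the real script reacts via inotify instead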

If you want to know how to stop the script, check here. In short, what you need to do is:

ps aux | grep sortDownloadedFiles

killall sortDownloadedFiles inotifywait



Process, Subprocess and Python

A process is an instance of a computer program being executed.
When you run code, an instance of the program is created. The instance contains the program code and system state, like the pointer to the next instruction to be executed, memory, etc. When you run the same code more than once concurrently, you are actually creating multiple instances of the same program.

A subprocess is a set of activities with a logical sequence that meets a clear purpose. A subprocess is a process in itself, whose functionality is part of a larger process: a new process that a program creates is called a subprocess.

Straight from the Python docs:

The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.

As said earlier, a subprocess carries with it its own system state, so we note in the above definition that every subprocess has its own defined input/output/error pipes.

Subprocess spawns new processes, but aside from stdin/stdout and whatever other APIs the other program may implement, you have no means to communicate with them. Its main purpose is to launch processes that are completely separate from your own program.

In Python, the subprocess module lets you run programs that are different from your program. For example, say you wrote a Python installer to install the package you made. Now, your package has certain dependencies. So, you can either tell the user to install them manually, or you could install them by creating a subprocess like:

import subprocess
subprocess.call(["sudo","apt-get","install","<package-name>"]) 

Note that the argument can be a single string or a sequence, but the string form is platform dependent (on POSIX, a plain string must be just the program name unless you set shell=True). So, I think it is better to pass the arguments as a sequence.
The call, check_call and check_output functions are all convenience functions built on the Popen interface. call() returns the exit code of the program, and you need to check whether there was an error or not. check_call, however, raises an exception if there was an error. check_output also raises an exception on error and additionally returns the output as a byte string.

Now, when to use these convenience functions and when to use the basic Popen? Here it is. Let me summarize: the convenience functions are wrappers around the Popen interface with the default behaviour that they wait for the subprocess to finish, i.e. they block further execution of the main process until the subprocess has either finished or timed out. You can look at the implementation of subprocess.py here.
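To make the differences concrete, here is a small sketch using ls as a stand-in command (any program would do):

import subprocess

rc = subprocess.call(["ls", "/tmp"])            # returns the exit code;
if rc != 0:                                     # checking it is up to you
    print "ls failed with code", rc

subprocess.check_call(["ls", "/tmp"])           # raises CalledProcessError on failure

out = subprocess.check_output(["ls", "/tmp"])   # raises on failure, returns stdout

p = subprocess.Popen(["ls", "/tmp"], stdout=subprocess.PIPE)
out, err = p.communicate()                      # with Popen you decide when
print "exit code:", p.returncode                # and how to wait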

Thursday, 11 December 2014

XSS and Same Origin Policy

To understand XSS we first need to understand what is Same Origin Policy.

Same Origin Policy is a web security guideline, implemented by web browsers (i.e. on the client side), that allows scripts running on pages belonging to the same website to access each other's DOM, but not pages that are part of other websites. This also means that all scripts that come from the same website will be trusted alike and granted the same permissions: access to the cookies, and so on.

So, is it possible that you host a website and when a client is viewing it, certain scripts are running without you knowing about it?


YES, and this is what XSS is. What actually happens is that the attacker causes additional scripts to run on the client's machine in order to access the authorization cookies for that website, because the website was not designed with URL sanitization in mind. Say, for example, you have an online account on a website. The website is vulnerable to XSS, so the attacker provides the client malicious URLs of the form http://www.mysite.com?q=mysearch<script>//access user cookie for this particular site and mail it to me</script>. This URL could be delivered in various forms, like an advertising mail sent to your inbox, or the recently popular form, as on Facebook, where some people convince you that you can see who visited your profile by pasting some "magical" URL in your console.

Now, what happens is that the part after the ? is searched for on the website. If the input is not sanitized, i.e. you output the input as it is, you are also outputting the script. Now, you see that you are not violating the Same Origin Policy, because the script has (ultimately) been delivered by www.mysite.com. So, the script has the same access as the other scripts. It accesses the authorization cookies and mails them off to the attacker. Now, the attacker imports that cookie into his own browser, and so to mysite.com the client and the attacker are the same. This access can lead to undesirable consequences (you could go from millionaire to bankrupt in one day).

This is only one form of an XSS attack. Proper sanitization should be performed before processing inputs to help avoid XSS attacks.
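The fix is to escape the input before echoing it back. A minimal sketch in Python (how you read the q parameter depends on your framework, so request_params here is a hypothetical stand-in):

import cgi

query = request_params.get("q", "")           # hypothetical: however your framework exposes ?q=
safe_query = cgi.escape(query, quote=True)    # turns <script> into &lt;script&gt;
page = "<p>Results for: %s</p>" % safe_query  # the script now renders as harmless text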

  

Thursday, 4 December 2014

Tata Photon+ and Ubuntu

This was the first time I was going to use Tata Photon+ on my Ubuntu installation. I was worried whether Tata Photon offered a Linux installer. I searched Google using a variety of combinations and found a PDF that offered a method of using a Tata Indicom USB modem on Linux!!



However, the installation required a tool called wvdialconf which was not installed on my system, and I had no way to install it without internet!!

So, hopelessly, I just inserted the USB modem in my laptop hoping for some magic to happen.
I went to the network options and found this!!

This, however, is a step further, when you are actually connected to the internet. You need to click on the dropdown and you will get an option to Add a New Connection. Click on it and you will get an option for Tata Photon+. Click on it and you will get a screen like the pic above (if, of course, you have balance in your Photon account :P). From the next time onwards, Ubuntu will remember it and you don't have to do it again. Just go to the list of available networks and you will find Tata Photon+ listed there!!



Monday, 1 December 2014

On my way to install memcached

Installing memcached on my laptop has really been a pain till now. Nothing seems to work as expected. Although memcached is installed, which I see by testing it on the console, it doesn't show up in phpinfo().

While installing memcached, I (unintentionally) changed some defaults, which included setting Apache to run on system startup, so lampp would not start. Here is the solution:

When Debian 7 starts, it will start apache2, so you should stop it first, then try to start lampp:

/etc/init.d/apache2 stop
/opt/lampp/lampp restart


This is, however, a one-time solution. No worries!!

To change the defaults (Ubuntu > 11.04), use this:


echo "manual" >> /etc/init/mysql.override
update-rc.d -f apache2 remove


Will update this post when memcached is installed!!


UPDATE!!

After numerous attempts at installing memcached, I uninstalled lampp and installed it again. I didn't uninstall memcached, however, and somehow when I restarted lampp, I could see memcached in phpinfo()!!

This might not work in every situation, but it offers a possibility in case all other methods fail.


Friday, 28 November 2014

Fixing dependencies and autoremove packages in ubuntu

If you are a developer, then you might install various packages and remove them when the work is over, or when you get a better solution. However, most packages are not standalone, i.e. they have dependencies on other packages, which apt-get installs automatically. apt-get is really a very nice package manager. It internally keeps a list of which packages were installed automatically and which were manual. So, if you remove a package you manually installed, you might not want to keep the other packages that were installed as part of its dependencies.
    
     Here is a very nice answer from Stack Exchange:

Now apt implements an Auto-Installed state flag to keep track of these packages that were never installed explicitly. When you uninstall a package you can add the --auto-remove option to additionally remove any packages which have their Auto-Installed flag set and no longer have any packages which depend on it being there (a package may also be kept if another suggests or recommends it depending on the value of the APT::AutoRemove::RecommendsImportant and APT::AutoRemove::SuggestsImportant configuration options).
I would have a look at the list of packages and decide if they are worth keeping; sometimes packages which you may want to keep are marked Auto-Installed by default. You can get information on what the various packages do by doing apt-cache show package_name. If you decide to keep some, you can use apt-mark manual followed by the names of the packages you want to keep.
Note that usually you would want to have library packages (most packages beginning with lib) marked as Auto-Installed, since there are few reasons to have these packages installed on their own - other programs usually require libraries to run, but they are of little use on their own. Even if you are compiling software against the library, you need the development package (ending in -dev), which depends on the library itself, so there is no need to explicitly install the library.
Also, using aptitude, you can do aptitude unmarkauto from the command line, or change the state within the curses interface. Within the package lists in the interface, all automatically installed packages have an A next to them. You can change this state by using m to mark an auto-installed package as manual and M to mark it as automatically installed again (also l to open a search dialog and Enter to view package details).
This pretty much explains everything.

Now, sometimes it happens that a package is installed without its dependencies, so obviously it does not work. apt-get keeps a list of the unmet dependencies. You can resolve these dependencies, i.e. install the required packages, by

                                           sudo apt-get -f install

This will install the dependencies and your package will work as expected.


Now, as said earlier, how do you get rid of packages that were installed as part of a dependency and are no longer required? Here it is:

                                          sudo apt-get autoremove

This removes the packages and frees up disk space.





Saturday, 22 November 2014

Django. Django Admin Page and Phpmyadmin

I have been developing web apps for quite some time in Django. I would like to discuss here some things which might not have struck you while developing with Django.

Django comes with an inbuilt admin page. Once you register your model in admin.py, you are set to use the admin page for adding/deleting records in your database. However, you might feel constrained by the functionality this admin page provides, as I did. Since I come from a PHP background, I badly missed phpmyadmin and its rich features. It feels quite strange to see PHP and Python associated (it did for me).

    One thing is for sure: to run phpmyadmin, you need a server that supports PHP. So, first of all, you need to have your wamp/xampp/lamp server running in order to access the database. Both it and Django's inbuilt server listen on 127.0.0.1, but on different ports (Apache on 80, Django's development server on 8000), so running both of them is never a problem. Now you need to configure the settings.py file inside your project with the database details, like


                                             'ENGINE': 'django.db.backends.mysql'
and change the username, password, etc. used to access the database. Don't forget to run manage.py syncdb after this.
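A minimal sketch of what that settings.py block looks like (all names and credentials here are placeholders):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',            # the database served by wamp's MySQL
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': '127.0.0.1',
        'PORT': '3306',            # MySQL's default port
    }
}

Now, how to make phpmyadmin notice this?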

Head over to config.inc.php in the htdocs directory of your wamp installation and provide the connection details to the Django setup in the manner it has been done for the Apache server above. Remember to put the $i++ before the configuration details.


Now, restart Apache and go to phpmyadmin. You will get an option to select between the 2 servers. Choose 127.0.0.1 (Django) and you will find the database sitting right there.

The bad thing about this is:
You cannot modify the database structure. You can only add/update/delete records from the database. There must be a tool for overriding this, so I will update later!!

Handling change in usernames, in a better way!

Some websites that have accounts for users give them an option to change their username. It appears to be a trivial task. I have implemented this functionality in 2 of my projects. The approach I used was to ask a LOGGED IN user for the new username and ask him to re-enter his password. This is a very simple approach and works well for small applications that do not have much database-heavy functionality.
     
     I recently changed my username on GitHub and was wondering what procedure they followed. I think it is highly improbable that they use the usernames we see internally in their databases too. What I mean is: suppose Mr. XYZ makes a commit in his repo on GitHub (XYZ being the username). In their database that stores the history of commits, XYZ is not stored; rather a hash (for more security) or a simple unique number is stored. There would be a user table where each such number or hash uniquely identifies a user and never changes, no matter how many times the user changes his username. Whenever some information is displayed that should identify "who?", they would simply fetch the username associated with the unique number or hash (of course, they could store this number in a session variable to avoid querying the same thing repeatedly). The advantage is that you don't have to traverse your entire database and substitute the old username with the new one. You simply change the username in the user table, and the unique identifier (the hash or the number) remains the same. It greatly brings down the computation required for a trivial task like this.
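The idea is easy to sketch with Django models (my own illustration of the pattern, not GitHub's actual schema):

from django.db import models

class User(models.Model):
    # The automatic primary key (id) never changes, no matter how
    # often the username does; every other table references this id.
    username = models.CharField(max_length=30, unique=True)

class Commit(models.Model):
    author = models.ForeignKey(User)    # stores user.id, not the username
    message = models.TextField()

# Renaming a user then touches exactly one row:
#   User.objects.filter(pk=user_id).update(username=new_name)
# and every Commit keeps pointing at the right person.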

Wednesday, 19 November 2014

Processing Feedbacks

It's the day before the end-semester exams and I can't help thinking about how the ERP team evaluates my feedback about professors: 10,000+ students. One obvious way I can think of is: they get an overall rating for each teacher. If the overall rating is good enough, I don't think they worry about the suggestions. If there is a poor rating, they would manually sample some random suggestions (if they read them at all).

Improper and inefficient I would say (if at all this is the process).

I am thinking of using Python's Natural Language Toolkit for this purpose. I will look for words like "good", "bad" and "should" and get their context, like "good at *", "bad at *", "should do *". Similar suggestions (a group of students giving similar suggestions) would be forwarded to the teacher to improve the classroom experience. Further updates after the endsem.
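As a first sketch of the idea, NLTK's concordance view already gives each keyword with its surrounding context (the sample feedback string is made up):

import nltk
from nltk.text import Text

# requires the tokenizer data: nltk.download('punkt')
feedback = "He is good at explaining concepts but should give more examples in class."
tokens = nltk.word_tokenize(feedback.lower())
text = Text(tokens)

for keyword in ("good", "bad", "should"):
    text.concordance(keyword)   # prints every occurrence with its context window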

Saturday, 15 November 2014

Ordering your Notes

You are a student and did not make notes properly for a particular subject? And you took photos of your friend's notes to study? Then this post is for you.

I too was in such a situation once. One thing to highlight is the naming convention used by most devices: most phones and cameras name the captured images yyyymmdd_hhmmss.

Obviously, this naming convention makes sure that no two photos have the same name. (I have not tested what happens when you deliberately change your device's time to an earlier time that corresponds to an already-present image on the device, though it is very difficult to remain accurate to the second.)

Python to the rescue.

My method is not efficient, but it is quick, and "quick" is what I wanted. So, I sorted the names of the images on the basis of when they were shot (which the timestamp in the filename records) and then renamed them with integers starting from 1.

This way you can easily remember what page you were on when you last left.

My code is available at https://github.com/light94/rename-notes
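The heart of it is just a sort followed by a rename; a minimal sketch (not the exact code in the repo):

import os

# run inside the folder of images: yyyymmdd_hhmmss
# filenames sort chronologically as plain strings
images = sorted(os.listdir("."))

for i, name in enumerate(images, start=1):
    ext = os.path.splitext(name)[1]     # keep the original extension
    os.rename(name, str(i) + ext)       # 1.jpg, 2.jpg, 3.jpg, ...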


UPDATE:
I have updated my code to rotate the images by 90 degrees, so that every time a new image comes in, we don't have to rotate it manually.

Note: One important thing that I learnt from this was that Python's os.listdir() returns the list in arbitrary order, so you have to sort it yourself.

Tuesday, 11 November 2014

Fixing Permissions of htdocs in Ubuntu

If you are using xampp, you might already know that the default location for your projects is /opt/lampp/htdocs. However, you cannot edit the contents of this folder without root permissions, and using the root account for trivial things like editing files is a very bad idea, since you can make harmful changes to your system unknowingly.

There is a very detailed and beautiful description of how to avoid this in this blogpost.

The idea is that we need to change the ownership of the folder to something other than root. No more sudo for trivial edits!! Go ahead and read that article.

UPDATE: The -R flag to chown applies the command to all the subdirectories (recursively) as well, e.g. sudo chown -R yourusername /opt/lampp/htdocs.

Sunday, 9 November 2014

Browser and the Event Loop

This is an extension of my last post, and I wrote it after watching Philip Roberts' talk.

A browser is constrained by what you are doing with JavaScript. The browser would like to repaint/render the screen 60 frames per second, but it can't render while there is code on the call stack. The render is like a callback, with the difference that it gets priority among callbacks: if the stack is empty and a render is pending, it will be executed ahead of any other callback queued before it.

While rendering is blocked, you cannot select text on screen, click a button and see the response, etc. That is why we should not put slow code on the stack: we should not block the event loop.

V8, Javascript, Event Loop

This is what I understood from Philip Roberts' talk at JSConf EU 2014.
V8 is Google's open source JavaScript engine. It is written in C++ and is used in Google Chrome.

The big picture: the JavaScript runtime, the Web APIs (extra things which the browser provides), the event loop and the callback queue.

JavaScript is a single-threaded runtime, i.e. it has a single call stack (it can execute only one piece of code at a time).
(Note: a call stack is basically a data structure that records where in the program we are.)

Blocking - essentially, this means code that is slow and therefore prevents execution of the statements after it. For example, a while loop that runs, say, 100 times: until the while loop is over, statements after it won't be executed. And the browser cannot render anything while there is code running on the call stack.


The solution to blocking: asynchronous callbacks.

Now, the browser is more than the runtime, as shown in the picture. It has other components, like the Web APIs and the event loop, which help with concurrency, i.e. making progress on more than one thing at a time.


Consider a piece of code like this (reconstructed from the walkthrough below):

console.log('Hi');
setTimeout(cb, 5000);   // cb is a callback that logs a message
console.log('Bye');
If we look at the call stack, the callback queue and the event loop together: when we start the script, main() is pushed onto the call stack. When console.log('Hi') is executed, it is pushed on top, runs, and pops off.



Now, as we saw before, setTimeout is an API provided by the browser, so setTimeout(cb) sets off the timer, which is then handled by the browser, and the call pops straight off the stack. That is, setTimeout is not in the call stack any more, and thus the next console.log() statement is executed and clears off, after which main() clears off (pops off). Meanwhile, the browser is handling the timer; when it fires, the Web API cannot simply push the callback back onto the stack, so it places it in the callback queue instead.

Here comes the role of the event loop. It looks at the stack and at the queue: if the stack is empty, it pushes the first thing in the queue onto the stack, where it gets executed.

Note: We pass a timeout of 0 to setTimeout when we want to defer a statement until the stack is clear.

The delay given to setTimeout is not a guaranteed time of execution; it is the minimum time before execution.



Saturday, 8 November 2014

Cursor Blinks in Linux, Noun Not Verb

Of late, I had been troubled by the peculiar behavior of the cursor in my Ubuntu setup. Symptoms?

1. The cursor blinks for some time and you believe that everything is going fine.
2. After a timeout, the cursor stops blinking and you begin to wonder and worry.


To be more precise, here is what the original bug reporter said (if you call that a bug):

When the cursor stops blinking, you start to wonder if something is wrong.

The point is that the cursor should never stop blinking in an active terminal window


Well, this is bug #838381.  

This is a general feeling. However, after coming to this page, I have now stopped worrying.

Let me sum up what the article said. 
The cursor's ceasing to blink is not a bug, but rather a deliberately introduced feature to save energy.

So, even though the cursor is not blinking, your code is working just as it should, behind the scenes. Period.

Friday, 7 November 2014

Using Virtualenv in Python

Virtualenv is a tool to create isolated Python environments. This may be useful if:

  • You need to test some module
  • You are doing a project that makes use of a different version of a package that is already installed.

So, I am installing pygame on my Ubuntu and don't want the installation to be system-wide. Therefore I decided to use virtualenv.

To make a virtualenv:
  1. Go to the parent directory and open terminal in that directory.
  2. Type virtualenv --no-site-packages <folder name>
  3. Then cd to the folder and type source bin/activate

This will create a virtual environment where you can install new python modules without the fear of it affecting the main python installation.
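While the environment is active, Python itself can confirm where it is running from (a quick sanity check, not a required step):

import sys
print sys.prefix   # points inside your virtualenv folder while it is active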

When your work is over, you can close the virtualenv using deactivate.

Thursday, 6 November 2014

Creating custom bash scripts

Often we come across activities where the same set of actions needs to be performed. These can be automated using bash scripts in Linux.

To make a bash script:
1) Begin the script with a shebang: #!/bin/bash
2) Type in the custom commands. Use $1, $2, etc. to refer to arguments passed to the script.
3) Give it a name and save it in your bin directory (create a folder in your home directory named bin).
4) Next, to make the file executable, type chmod +x <filename>

Now you are ready to use the power of automation in Linux. Have a look at https://github.com/light94/myShellScripts .

Wednesday, 5 November 2014

Hacking with Nautilus Actions

I have been using Sublime Text for quite some time now and am an absolute fan of it. There is a very nice feature in it: open a folder. What it does is give you a list of all the files in the folder in the package explorer pane. This makes it easy to select any file in the folder for editing, closing and reopening.


This is indeed very useful, but I thought of adding an alternative. Suppose you are browsing through your projects, stumble upon one of them and decide to work on it. It is painful (for me at least) to open Sublime Text and then open the folder from there. What if there were an option, right while browsing, to open the folder in Sublime? Sounds cool.

So this link came to the rescue http://www.howtogeek.com/116807/how-to-easily-add-custom-right-click-options-to-ubuntus-file-manager/

Basically, you need a tool called Nautilus Actions, which you install with sudo apt-get install nautilus-actions. You then need to launch it and add a new action. Put a suitable name in Context Label (this is what will appear in the right-click menu). Then go to the Command tab and write subl in Path and -a %F in Parameters. subl is the command to launch Sublime Text from the terminal. The -a argument adds folders to the current window, and %F passes the name of the current directory to the command.


Then close Nautilus and reopen it. Now, when you right-click on any folder, you will get the option to open it in Sublime.

Tuesday, 4 November 2014

Onepiece Fans, Behold

After the script to download Naruto Shippuden episodes (https://github.com/light94/naruto), here (https://github.com/light94/onepiece) is the script to download One Piece episodes. This one is more customized than the previous.

Both websites offer almost the same structure, and therefore the procedure is almost the same.

Facebook's Profile Picture Privacy Update

This is an update to an earlier post (http://justanotherlearner.blogspot.in/2014/08/facebook-bug-maybe-maybe-not.html) where I had misunderstood a feature of FB and pointed it out as a bug.
Today, I came across the Facebook privacy page (https://www.facebook.com/about/privacy/your-info#public-info) where it is written that

"Your name, profile pictures, cover photos, gender, networks, username and User ID are treated just like information you choose to make public"



So, it is absolutely fine if you are able to download the profile pictures of others. However, FB still provides the option of keeping your profile picture private. There is certainly a discrepancy in the information, and it should be resolved.

Monday, 3 November 2014

Saving Passwords in Git

For git version >= 1.7.10
Entering your username and password every time you do a git push is a waste of time. The following commands will save your credentials in memory for a defined time period.

To store your username and password, use the following commands

git config --global credential.helper cache

By default, git stores the password for 15 minutes. You can change this by

git config --global credential.helper 'cache --timeout=3600'

(If you want a timeout of 1 hour)

"Gotta" Catch'em All exceptions

So, I was recently doing a web scraping project. I had a database of people and I had to look up and confirm their identity, in the sense that they had either studied or taught at my college. The task can be done manually for a small set, but I had 7000 records!!

So I used xgoogle for this purpose. It is a Python Library for Google Search.
Now, having completed the task, I can name at least 7 exceptions that can occur when you automate web lookups. They are:
SearchError (class in xgoogle that handles exceptions raised by Google during searches)
HTTPError (generally raised when there is some error in authentication, like trying to access a secured page of an organization, or when the URL is http when it really should be https)
URLError (being a subclass of OSError, this error relates to system-level problems like input/output; this includes errors in fetching data from a URL)
SocketError (raised when there is some issue with the server, e.g. overload or throttling)
ValueError (raised when a required datatype conversion is not permitted)
IncompleteRead (raised when the server closes the connection before the chunk of data has been fully retrieved)
BadStatusLine (raised when we request a webpage and the server responds with a status code that Python cannot understand)


Well this is not the complete list.

I had to automate the task in such a way that I could start the code before going to bed and have it completed when I woke up. So, I decided to catch them all:

try:
    do_lookup()            # your code here
except KeyboardInterrupt:
    raise                  # let Ctrl-C still stop the script
except:
    pass                   # handles all exceptions except KeyboardInterrupt

This piece of code will ensure that unless you manually try to end your code, it will not stop.
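In the batch job itself, the pattern sits inside the loop over records, so one bad lookup never kills the run (verify_identity and log_failure are hypothetical stand-ins for my actual functions):

for record in records:
    try:
        verify_identity(record)     # the web lookup that can fail in many ways
    except KeyboardInterrupt:
        raise
    except Exception as e:
        log_failure(record, e)      # note the failure and move on to the next record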

Sunday, 2 November 2014

Testing your android app on your phone

This is something straight from the source (Google's Android resources), and very important: how to test your Android app on your phone?

Enable USB debugging on your device.
On most devices running Android 3.2 or older, you can find the option under Settings > Applications > Development.
On Android 4.0 and newer, it's in Settings > Developer options.
Note: On Android 4.2 and newer, Developer options is hidden by default. To make it available, go to Settings > About phone and tap Build number seven times. Return to the previous screen to find Developer options.

Friday, 31 October 2014

Download Naruto

So, the first release is out https://github.com/light94/naruto

The script will download the latest released Naruto episode from narutospot.net. You must have BeautifulSoup installed. Just add the last episode number you have seen in naruto.txt and run the script.

Thursday, 30 October 2014

Added Utilities to ubuntu

This post will be a wiki for things that you can add to Ubuntu to make it easier to use.

1) The good old Open in Command Prompt utility from Windows.

Often there are instances when you are searching for some file using your file manager (generally Nautilus) and then you need to do some operation on that file. Now imagine that the directory you are in has the path root/Downloads/Chrome/A1/B1/.... (as long as you can imagine). Would you like to do that many cd commands, or a single cd with a very long directory path? Here's something much better than either.
You can add the open-in-terminal utility for Linux using the following steps:

sudo apt-get install nautilus-open-terminal
Exit the terminal and now right-click in any directory you are in. You will find an Open in Terminal option there.

Tuesday, 28 October 2014

MRO in Python

In Python, calling the superclass' __init__ is optional; it is never called automatically.
So, if you override the __init__ of a class and still need the parent's initialization, you must explicitly call the parent class' __init__ with the appropriate parameters.

This is done by:

super(NameOfClass, self).__init__()



Python, however, supports multiple inheritance, i.e. a class can have more than one superclass.

The MRO (Method Resolution Order - the order in which base classes are searched when looking for a method) is as follows (see the sketch after the list):


1) Depth first, left to right.
2) If any class is repeated in this search, all but the last occurrence of the class will be deleted.
3) Reject any class that has an inconsistent ordering of base classes.
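A quick sketch to see rule 2 in action (a naive depth-first search would have picked A's method via B; keeping only A's last occurrence means C wins):

class A(object):
    def who(self):
        return "A"

class B(A):
    pass                                   # B inherits who() from A

class C(A):
    def who(self):
        return "C"

class D(B, C):
    pass

print D().who()                            # prints "C", not "A"
print [k.__name__ for k in D.__mro__]      # ['D', 'B', 'C', 'A', 'object']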

Thursday, 23 October 2014

Cleaning the Python Terminal

This article is about the different methods that can be used to clear the Python terminal.

The first method is like this,

Ubuntu:
import os
os.system('clear')


Windows:
import os
os.system('cls')




Second Method:

print chr(27) + "[2J"     # Python 2

print(chr(27) + "[2J")    # Python 3

Note that this method is only for ANSI Terminals.

27 is the decimal code of the ESC character, so ESC[2J is recognized as the ANSI clear-screen command.

Third Method:

Just keep printing new lines, like

print '\n' * 80
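The first method can be folded into one cross-platform helper (a small convenience sketch):

import os

def clear():
    # 'cls' on Windows, 'clear' everywhere else
    os.system('cls' if os.name == 'nt' else 'clear')

clear()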

Wednesday, 22 October 2014

Getting scrapy to work on ubuntu

This post is for those who are trying to install scrapy on their Ubuntu machines.

In order to install scrapy, the following components have to be installed:
1) Python 2.7
2) pip and setuptools
3) lxml
4) OpenSSL

When you install Ubuntu, it also installs Python 2.7.x, lxml and OpenSSL along with it (I've tested this at least on Ubuntu 12.04 to 14.04). In certain cases pip is not preinstalled, so you can install it like this:
sudo apt-get install python-pip

One thing that the docs do not talk about is installing the Python development package. Without it, trying to install scrapy leads to build errors, so you must install it prior to installing scrapy:
sudo apt-get install python-dev


After this you just need to do a
 sudo pip install scrapy

One important point I would like to mention is that people who connect to the internet via a proxy can still face problems installing Python packages. This problem has an answer here.
TL;DR: append the -E flag to sudo, like sudo -E pip install scrapy. This makes sudo keep your proxy environment settings, which are used to connect to the package library.

Saturday, 20 September 2014

Accessing dictionary values from a list of dictionaries in Django... Wait. What?

This post deals with a question that has already been answered on Stack Overflow here. I had to google a lot, trying to phrase the exact words that would lead to this result. So, I am explaining the answer here as well (apart from the link provided), for myself and for others who find this blog before facing that problem (or its solution).


I was in a situation in which I had to display the details of some users on the admin page of my Django app. The list of users can be obtained in 1 line:

                     users.objects.filter(#myfiltercriteria)

The queryset returned can be transformed into a list of dictionaries with:

                    users.objects.filter(#myfiltercriteria).values()


Now the problem was how to retrieve these values. The simpler way is to retrieve the required values in the view itself, compile them into a list, and then use Django's built-in template tags for lists to access the elements.

Okay, I may be wrong, depending on the use case. (I was wrong in my case, as I had to retrieve the SAME set of values for all the users.)

A better and more efficient way is to make custom template tags. For the sake of better explanation, let us access the name attribute of the users dictionary passed to a template.

   So, the custom filter tag is (credits: culebron):

@register.filter
def get_item(dictionary, key):
    return dictionary.get(key)

So, inside your template, you would access the names like this:

{% for user in users %}
    {{ user|get_item:"name" }}
{% endfor %}

This is extremely useful and neat, but is not offered by Django by default.
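For completeness, the filter has to live in a templatetags package (with an __init__.py) inside an installed app and be loaded in the template. A minimal sketch (the file name dict_extras is my own choice):

# yourapp/templatetags/dict_extras.py
from django import template

register = template.Library()

@register.filter
def get_item(dictionary, key):
    return dictionary.get(key)

Then put {% load dict_extras %} at the top of the template.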

For a reference on how custom template tags are made, see this.


Thursday, 4 September 2014

Always check the error log

5 hours trying to debug a very simple problem... I've learned my lesson.

Always check the error log

I cannot emphasize this point enough; it can save you lots of time. That's why it is there!

So, I made a web app in the CodeIgniter framework. All the development took place on my Windows computer. Everything went perfectly; the web app was fully functional on localhost. Then I decided to launch it on a Linux server (Ubuntu, to be precise). I configured the database settings and started the server. The web app uses a login on its first page itself, and it displayed the form correctly. I entered the login id and password, only to be met by a blank page. I have been greeted many a time by 404 pages, but this was a new guest. A frequent user of the web developer console, I quickly found out that it was a 500 Internal Server Error. After a Google search I found that it happens because of some programming issue on the page/site. This was the first time I thought Google was wrong, and this thought was not baseless: the same app was running perfectly on my PC. So, I searched here and there and everywhere (:P). Tried everything; no perfect solution.

Then I posted this question on Stack Overflow here. What I needed was just 1 comment (thanks @Prix).
My error log just threw the syntax error in my face. I was bewildered. Why?
Google again. I found that the code I had written used a feature introduced in PHP 5.4, while my server was running PHP 5.2. The feature is called function array dereferencing: accessing an array value at a particular index directly on the array returned by a function, e.g. foo()[0]. On the older PHP, I instead had to store the returned array in a temporary variable and then access the index from it, e.g. $tmp = foo(); $tmp[0];.

Obviously, the other solution was to use PHP 5.4. But to my dismay, the truth was that PHP 5.4 repos are not available for Ubuntu 12.04. So I followed the instructions here. However, it did not work, and I am still using PHP 5.2, but without issues.

Friday, 22 August 2014

So, the delay has a reason! Windows password check

A large portion of the world's population uses Microsoft Windows as their operating system. To all those people, I ask...

Have you ever wondered why it takes a long time to display an error message when you enter a wrong login password, whereas if your password is correct, you log in in a reasonably smaller amount of time?
         Most people think that it is their system's fault, a slow processor (as if some big data crunching were going on :P).

The reason is simple and intelligent.
The 2 major concepts are password caching and preventing brute-force attacks.
It is better explained on the Microsoft blog. Go ahead and satisfy yourselves :)

Tuesday, 19 August 2014

Facebook Bug? Maybe... Maybe Not

Recently, I was fiddling around with things and found something which to me seems like a bug, but is not one according to Facebook.

So, what I found is that the visibility/privacy settings, as in this picture, can be overruled.
fb privacy


I have tried reporting this to the Facebook team, and according to them this is not a security bug, so I can safely post my report over here.

So, I was using the developer tools provided in Google Chrome. I asked my friend to change the privacy setting of his profile picture such that all except me could see his image. Note: while a profile picture is visible to all when someone visits a person's profile, they only see the cropped portion of the profile picture. Only the people granted access can click on it to see the complete picture. Of course, this is what Facebook claims, or maybe we have deduced it incorrectly.

       I used the DOM inspector to find the unique URL of the profile picture, as in the picture below.


So, the unique URL is the portion that I have scribbled out with blank (:P).

I copied this URL. Now, I went to my own account and clicked on my profile picture. Again using the DOM inspector, I found the common URL. It turned out to be this (the part which is not scribbled):
I then replaced the scribbled part of this URL with the scribbled part of the previous image.

And the result was: I could see the profile picture of my friend, and not only the cropped part but the whole picture. Even going into incognito mode, I could still see it.


So, according to me, this is incorrect, and as mentioned above I reported it to Facebook twice. But it seems I am wrong... I'd like to hear your comments on this :)

EDIT:
Facebook offers an API to see the images, as in here. However, the point to be noted there is:

Permissions

  • Any valid access token for any photo with public privacy settings.
  • For any photos uploaded by someone, and any photos in which they have been tagged - A user access token for that person with user_photos permission.

Wednesday, 13 August 2014

Using BigDump

I recently was in a situation where I had to push code for a new website to my server. Simple enough, except that I could not access phpmyadmin by either the remote or the local method. (The local method means accessing phpmyadmin installed on the hosting server itself, whereas the remote method means using phpmyadmin on your own system, e.g. I had it installed as part of wamp, to access the MySQL database of the remote server.)

There were large queries to be made, and I didn't want to type them all into the console. It was at this time that I came across the bigdump script. What it actually does is execute your MySQL dump (usually generated by an export of your database). That is, it imports a database to the server. It is a very good script, made specially for problems like this.

The steps involved are as follows:

  1. Create a folder(let its name be dump).
  2. Transfer the bigdump.php and the dumped sql file in that folder.
  3. On your browser, enter the path to the script.
  4. The script will ask your permission to run the queries in database.

The script itself is customizable. A few things to be kept in mind:
  1. You need to change the database login credentials in the script.
  2. If the dump contains a CREATE DATABASE query, you need to uncomment a line in the script.

With these things in mind, bigdump turns out to be a very useful tool.

Wednesday, 9 July 2014

The easy way to deal with dates in PHP

One of the good things that PHP versions 5.1 and above come with is the utility


 date_default_timezone_set('timezone');

What it means is that whenever you use the date() function, it will take as reference the "timezone" you specified.

For a list of timezones, click here.

Once this is done, call date() with the necessary formatting parameters, for example the format accepted by MySQL, i.e. yyyy-mm-dd. You get it with
date('Y-m-d')


If you want to know what that format means, click here.

With this procedure you are ready to use dates easily!!

Sunday, 25 May 2014

PDO -> The PHP Data Object

A PDO object represents a connection between PHP and the database server.

Given the word object in the title, we know that to use PDO, we need to instantiate it first.

From the PHP manual, I got:

public PDO::__construct ( string $dsn [, string $username [, string $password [, array $options ]]] )

or you can use

$pdo = new PDO(string $dsn [, string $username [, string $password [, array $options ]]]);

The PHP manual says that the DSN (Data Source Name) contains the information required to connect to a database:

In general, a DSN consists of the PDO driver name, followed by a colon, followed by the PDO driver-specific connection syntax

I use MySQL, so in my case it will be

"mysql:host=$databasehost;dbname=$databasename"
 
The username and password are clear.
 
Options: a key=>value array of driver-specific connection options.
       Ref: https://php.net/manual/en/ref.pdo-mysql.php

Friday, 25 April 2014

Shortcuts, Shortcuts Everywhere


In this post, I shall show how to create a shortcut for a full shutdown in Windows 8, and a shortcut to change the proxy server quickly.


First, the Full Shutdown:

1. Right-click on the Desktop and create a new shortcut.
2. Type the location of the item as shutdown.exe -s (just like that).
3. Give it an appropriate name.
           Done!!!

Second, the Proxy

1. Right-click on the Desktop and create a new shortcut.
2. Type the location of the item as control Inetcpl.cpl,Connections,4
3. Give it a name.
         Done!!!


The above 2 shortcuts will definitely save some (if not much) of your daily time.

Friday, 14 March 2014

Github Newbie or unable to operate Github for Windows? This might be for you.

I started using GitHub almost a month back. Initially, I used to create an online repository (repo for short) and create files there only. But how long could that go on??

   So, I installed GitHub for Windows. The website describes it as "The easiest way to use GitHub on Windows", and this is indeed true. But proxy users (like me :P), you might face some difficulty in the beginning: you might not be able to clone or push your repositories from the desktop. I had a similar problem, and I contacted Steve Ward, a GitHubber, and thanks to him, I can use GitHub even while behind a proxy. Read on to find out how you too can solve the problem.

  1. First of all, if this blog were not here, you would use this link to communicate your problem. But if you have the same problem I had, why repeat the same task? Read the points below.
  2. My log file read:
GitHub.IO.ProcessException: Cloning into 'ERP'...
fatal: unable to access 'https://github.com/Acyut123/ERP.git/': Failed connect to github.com:443; No error

If you are behind a proxy or a firewall, port 443 may be blocked, and this article might help you open certain ports.


If you didn't get that, no need to worry. (Note that GitHub doesn't officially support running the application behind a proxy, for the release I am using.)

3. Open Git Shell and type in the following:

git config --global http.proxy http://proxyuser:proxypwd@proxy.server.com:8080

Replace `proxyuser` with your proxy username and `proxypwd` with your proxy password. Also, `proxy.server.com:8080` needs to point to the URL of your proxy server.

I too followed the same, and voila!! Immediately, all the functions of GitHub became accessible from Windows.

Thank you GitHub for the awesome user service!!!

Edit

If you temporarily do not require a proxy server for connecting to the internet, the command mentioned above will, in most cases, prevent GitHub for Windows from connecting to the server.
Run the following command:
git config --global --unset http.proxy

and your network settings will be restored to the normal!!