Tuesday, March 6, 2018

How to Install Windows 7 or 10 on Mac

Written by Pranshu Bajpai |  | LinkedIn

I like dual booting my systems since I switch between Windows, Mac OS X, and Linux fairly regularly. On a PC, I usually dual boot Windows and Linux, and on a Mac, I usually dual boot Mac OS X and Windows. Installing Windows on a Mac may or may not go smoothly depending on how old the Mac is. I've installed Windows 7 on a 2011 iMac without a problem, so in my experience this procedure works at least as far back as 2011 Macs. But the process itself can get convoluted and there are some pitfalls to avoid, so I decided to document it here.

Bootcamp: To Use or Not to Use?

 

Definitely use Bootcamp. Bootcamp is Apple's native utility for installing Windows on a Mac. Installing Windows and configuring the bootloader without Bootcamp can be a painful and unnecessarily long process, so using Bootcamp is recommended. For example, when I tried to boot my Macbook Pro off of a USB drive containing the Windows installation, it got to the installation screen but then would not let me install Windows on the disk, as shown below. Note that the disk was MS-FAT. It did not install even when I deleted the partition, leaving plenty of unallocated space, and I could not even format the unallocated space as NTFS.


I tried to fix this from the command line, but it was a lot of hassle and should be avoided:



Meanwhile, I messed up the original Mac OS X installation while trying to install Windows and had to reformat the entire drive and reinstall Mac OS X:


I had to reformat the hard drive because without the reformatting it would not even show up as an option during the Mac OS X installation.


So I formatted the drive as shown above so I could reinstall Mac OS X.

All this to show that installing Windows on a Mac without Bootcamp is an unnecessarily cumbersome activity that should be avoided. 

Using Bootcamp

 

Apple did a great job of documenting the procedure here. Follow Apple's instructions there to use Bootcamp. Note that you should have a Windows ISO and a USB drive ready.




Make sure you also download the correct version of drivers needed for your particular Mac. My 2012 Macbook Pro needed version 5.1.5621. If you are installing Windows 7 like me, quit at the screen shown below to manually download the archive containing the right drivers.


Unzip the archive and place it at the root location of your thumb drive.


Next, open Bootcamp again and this time check 'Install Windows 7 or later version'.

It now asks you to resize your partition to make room for Windows. 

It is here that I encountered a strange error: "Your disk could not be partitioned. An error occurred while partitioning the disk. Please run Disk Utility to check and fix the error."


So of course I followed the advice and ran Disk Utility's 'First Aid' on my drive to see if it was failing. Everything seemed OK. I thought I might have better luck fixing drive errors if the drive wasn't mounted, so I rebooted into Single User mode (Command+S) and ran '/sbin/fsck -fy' to fix errors:


No luck though. The same error persisted while trying to partition the drive. Long story short, it was FileVault on the Mac that was causing the partitioning error. FileVault is Apple's disk encryption utility, and while it is active it protects the drive against manipulation by Bootcamp. So turn off FileVault. This, unfortunately, can take a bit of time, as disk encryption and decryption can be slow.



After turning off File Vault, I was able to partition the drive and the system rebooted into the thumb drive to install Windows:


I had Windows running on the Mac in a short while, but the Bootcamp drivers still needed to be installed. Otherwise, there is no network connectivity, no display driver, no sound, etc.

So we run the driver installer that we downloaded earlier to the root folder of the USB stick and let the drivers install:


I still did not have the display working right after all the drivers were installed. It turns out that Windows needed to update itself before that problem was fixed. So I updated Windows, and the display drivers for the resident NVIDIA GT 650M card on the 2012 Macbook Pro showed up right away under 'Device Manager'.

Thus ends the dual boot saga.

Thursday, March 1, 2018

[Fix] TexStudio will not compile bibtex files

Written by Pranshu Bajpai |  | LinkedIn

I have been running into this issue a lot lately as I write more and more research papers in LaTeX. I have fixed it in the past, but it has come to the point where I need to make a journal entry here so I can come back and look at the procedure. I hope it helps other readers as well.

Problem: I have a .bib file placed in the same directory as my .tex file, and the bibliography compiles just fine. However, at a certain point it just stops compiling the bibliography, and either one particular citation or all of them cannot be resolved. In the document, the affected citations appear as [?].

Things to try: Go to 'Tools' and 'Clean Auxiliary Files' then compile the document again. It may or may not work.

Solution: I discovered that cleaning auxiliary files, then going to 'Tools' and clicking Bibliography (F8) to generate bibliography again and then compiling the document again (F6) works and brings all your citations back into the document.

Wednesday, November 8, 2017

Penetration Testing Video Series by AmIRootYet [Pranshu]

Written by Pranshu Bajpai |  | LinkedIn

As of November 2017, I have started posting a series of videos detailing and demonstrating several penetration testing concepts on a YouTube channel here:

https://www.youtube.com/channel/UC_MuHQPbf3EatJc7M6nDTlQ

The purpose of this channel is to foster a deeper understanding of security concepts and, more importantly, how hackers operate. To beat the enemy, it is crucial to comprehend how they operate. Knowing the adversary is our best defense.

The format of the videos is a demonstration of a security concept on Kali Linux, that is, a practical lab scenario. I will explain as much as I can in the short videos, but at this point I will assume that visitors will do background reading on the theory behind my demonstrations on their own. In the future, if time permits, I might include some theory videos as well.

Please subscribe, like, and comment on the channel to show your support. Pursuing a doctoral degree in computer science keeps me very busy and this support encourages me to keep posting regularly despite my busy schedule.

Thank you!

Friday, January 6, 2017

How to Generate GPG Public / Private Key Pair (RSA / DSA / ElGamal)?

Written by Pranshu Bajpai |  | LinkedIn


This post is meant to simplify the procedure for generating GnuPG keys on a Linux machine. In the example below, I am generating a 4096-bit RSA public/private key pair.

Step 1. Initiate the generation process

#gpg --gen-key
 This initiates the generation process. You have to answer some questions to configure the key size and your details. For example, you select from several kinds of keys available; if you do not know which one you need, the default (1) will do fine.

I usually select my key size to be 4096 bits, which is quite strong. You can do the same or select a smaller size. Next, select an expiration date for your key -- I chose 'never'.



Step 2. Generate entropy


The program needs entropy (randomness) to generate the keys. For this, you need to type on the keyboard, move the mouse pointer, or generate disk activity. Even so, you may have to wait a while before the keys are generated.


For this reason, I use rng-tools to generate randomness. First install 'rng-tools' by typing:
#apt-get install rng-tools
Run the tool: 
#rngd -r /dev/urandom
The process of finding entropy should now conclude faster. On my system, it was almost instantaneous.



Step 3. Check ~/.gnupg to locate the keys

Once the keys are generated, they are usually stored in ~/.gnupg, a hidden gnupg directory in the home folder. You can check the location of keys by typing:

#gpg -k
The key fingerprint can be obtained by:
   #gpg --fingerprint

Step 4. Export the public key to be shared with others


For others to be able to communicate with you, you need to share your public key. So move to the ~/.gnupg folder and export the public key:

#gpg --armor --export email@host.com > pub_key.asc
'ls' should now show you a new file in the folder called 'pub_key.asc'. 'cat' will show you that this is the public key file.
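Before sharing pub_key.asc, you can sanity-check that it really is an ASCII-armored public key: armored public keys always begin with the standard OpenPGP header line. Here is a small sketch of such a check (my own illustration, not part of gpg):

```python
# Sketch: verify that an exported file looks like an ASCII-armored
# GPG public key. The header below is the standard OpenPGP armor header.

def looks_like_public_key(path):
    """Return True if the file starts with the PGP public key armor header."""
    with open(path) as f:
        first_line = f.readline().strip()
    return first_line == "-----BEGIN PGP PUBLIC KEY BLOCK-----"
```

If this returns False on your exported file, you probably forgot the '--armor' flag and exported the key in binary form.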



Important !

Needless to say, do not share your private key with anyone.

Wednesday, October 12, 2016

[MACchanger] Spoofed MAC address changes back to original permanent MAC before connecting to WiFi

Written by Pranshu Bajpai |  | LinkedIn


So I needed to spoof my machine's MAC / hardware address as part of a routine penetration test. One problem that I keep facing when using the Kali Linux utility 'macchanger' to do this is that the MAC is successfully spoofed but reverts to the original MAC address right before I attempt to connect to a wireless access point. Good thing that years of working in security / hacking have made me paranoid enough to constantly check whether the new spoofed MAC address is being used. 'ifconfig' in a terminal tells me that it is not. Instead, right before connecting to the wireless access point, my machine went back to its original MAC address on 'wlan0'. Not good.


Solution to retain the spoofed MAC address on wlan0 in Kali Linux:


I've discovered that these three commands will help:

ifconfig wlan0 down
ifconfig wlan0 hw ether 00:11:22:33:44:55
ifconfig wlan0 up

Additionally, you may have to turn your WiFi off and on using the graphic panel in the top right. But now you can connect to the wireless access point, and 'ifconfig wlan0' should reveal that your machine is using the spoofed MAC address 00:11:22:33:44:55, as shown in the image below.
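If you would rather use a random throwaway address than a fixed one like 00:11:22:33:44:55, here is a small sketch (my own illustration, separate from macchanger) that generates a valid random MAC:

```python
import random

def random_mac():
    """Generate a random locally administered, unicast MAC address.

    Setting bit 1 of the first octet marks the address as locally
    administered (so it cannot collide with a vendor-assigned MAC);
    clearing bit 0 keeps it unicast.
    """
    first = (random.randint(0, 255) | 0x02) & 0xFE
    rest = [random.randint(0, 255) for _ in range(5)]
    return ":".join("%02x" % octet for octet in [first] + rest)
```

You would then pass the generated string to 'ifconfig wlan0 hw ether' exactly as in the three commands above.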


Friday, November 13, 2015

'apt-add-repository' command not found Debian / Ubuntu [Solution]

Written by Pranshu Bajpai |  | LinkedIn

You might have encountered certain non-standard packages that have no installation candidates in your current repositories. In such cases, you can try to add a new repository. However, you might then run into an error that says: 'apt-add-repository: command not found'. The system currently has no path to the 'apt-add-repository' binary, which is why it says it cannot find that command.

Here's the fix


Execute the following commands in your terminal:

$wget http://blog.anantshri.info/content/uploads/2010/09/add-apt-repository.sh.txt
(thanks to the author of this script!)
$mv add-apt-repository.sh.txt /usr/sbin/add-apt-repository

$chmod o+x /usr/sbin/add-apt-repository

$chown root:root /usr/sbin/add-apt-repository
If you are not using a 'root' account, then add 'sudo' in front of each of these commands before executing them.

Now, try adding the new repository again. For example:

$add-apt-repository ppa:webupd8team/sublime-text-2
$apt-get update

(the repository you are trying to add might be different in your case)


You should now be able to add new repositories to your system and install non-standard packages.



Please let me know in the comments below if you come across any issues.

Thursday, October 15, 2015

How to get WiFi to work after installing Ubuntu or Lubuntu on Macbook?

Written by Pranshu Bajpai |  | LinkedIn

Problem: No WiFi connectivity in Lubuntu after installing it on a Macbook Air.


I recently installed Lubuntu to breathe life into my old Macbook Air 1,1 (2008). The installation went smoothly and the operating system has given me no problems so far. The only thing that does not work right off the bat is WiFi -- there are no WiFi drivers and no network icon. The icon is not the real problem, though; getting the right drivers is.

After sifting through a lot of content on the Internet, I was able to get it working on my Mac Air 2008 and another Mac Air late 2010 3,2 model. Both of these have slightly different WiFi cards -- although both are Broadcom -- and so require slightly different procedures. But these steps should work for most people out there.

How to enable WiFi in Lubuntu on a Macbook?


Ubuntu, or Lubuntu, seems to be missing drivers for the Broadcom network hardware installed on a Macbook -- which leads to the problem of no WiFi. You need to get the drivers appropriate for your device.

With Internet connection


WiFi is obviously not working on this device yet, but if you have any other means of obtaining connectivity on this Macbook, that simplifies things a lot. Just type the following commands:

#sudo apt-get update
#sudo apt-get purge bcmwl-kernel-source
#sudo apt-get install firmware-b43-installer

The 'purge' part is to get rid of 'bcmwl-kernel-source' in case you have been trying versions of that driver. It may or may not work for some systems. I tested on two different Macbook Airs (2008 and 2010) and each reacted differently to it. I found 'firmware-b43-installer' to be more reliable.

Since you have connectivity, the apt-get command will simply load the best-suited version of the driver on your machine, and after a reboot, you should be able to get WiFi working. I wasn't so lucky though...

Without Internet connection


Find out exactly what WiFi hardware you have on your Macbook by using the following command:

#lspci -nn | grep Network

That will tell you the details you need to know. For instance, in my case, I received the following output:

01:00.0 Network controller [0280]: Broadcom Corporation BCM43224 802.11a/b/g/n [14e4:4353] (rev 01)

Here, 'BCM43224' is the important part. Look around for the best-suited version of the b43 firmware for your card.
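If you want to pull the chip name and the PCI vendor:device ID out of that output programmatically, here is a small sketch (my own illustration, written against the sample line above):

```python
import re

def parse_lspci_line(line):
    """Extract the Broadcom chip name and [vendor:device] ID
    from a line of `lspci -nn` output."""
    m = re.search(r"Broadcom.*?(BCM\w+).*\[([0-9a-f]{4}:[0-9a-f]{4})\]", line)
    if not m:
        return None
    return m.group(1), m.group(2)
```

For the output shown above, this returns ('BCM43224', '14e4:4353'); the vendor:device pair is what you would search for when hunting down firmware.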

Now, you can go ahead and obtain b43_updated, unzip it, and copy its contents into /lib/firmware/:

#sudo cp -r b43/ /lib/firmware/
#sudo modprobe -rv b43
#sudo modprobe -v b43

Your /lib/firmware/ folder should now hold the necessary files:



Now reboot, and you should have the WiFi working.

WiFi network connectivity icon missing from panel

Do you still not see a difference? Maybe you're looking for the WiFi connection icon on the taskbar panel and it's just not there. In that case, 'nm-applet' is missing from your environment. You can fix this in the following manner:

Preferences --> Default applications for LXSession --> Autostart --> Manual Autostart --> type: nm-applet --> click: 'Add'

Logout and log back in. The WiFi applet should be there now.

Tuesday, March 10, 2015

/var/log Disk Space Issues | Ubuntu, Kali, Debian Linux | /var/log Fills Up Fast

Written by Pranshu Bajpai |  | LinkedIn

Recently, I started noticing that my computer keeps running out of space for no reason at all. I didn't download any large files and my root partition should not be having any space issues, and yet my computer kept telling me that I had '0' bytes available or free on the root drive. Finding this hard to believe, I invoked the 'df' command (for disk space usage):
#df

So clearly, 100% of the disk partition was in use, and '0' was available to me. Next, I checked whether the system had simply run out of 'inodes' to assign to new files; this can happen if there are a lot of small files of '0' bytes or so on your machine.
#df -i

Only 11% of inodes were in use, so this was clearly not a problem of running out of inodes. This was completely baffling. The first thing to do was to locate the cause of the problem. Computers never lie. If the machine tells me that I am running out of space on the root drive, then there must be some files that I do not know about; most likely these are 'system' files created during routine operations.

To locate the cause of the problem, I executed the following command to find all files of size greater than ~2GB:
# find / -size +2000M
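The same hunt can be done with a short Python sketch (an illustration equivalent to the `find` command above, not a replacement for it):

```python
import os

def find_large_files(root, min_bytes):
    """Walk `root` and yield (path, size) for files at least `min_bytes` big."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # unreadable or vanished file; skip it
            if size >= min_bytes:
                yield path, size
```

Calling find_large_files('/', 2000 * 1024 * 1024) mirrors the `find / -size +2000M` invocation.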

Clearly, the folder '/var/log' needs my attention. Seems like some kernel log files are humongous in size and have not been 'rotated' (explained later). So, I listed the contents of this directory arranged in order of decreasing size:
#ls -s -S

That one log file 'messages.1' was 12 GB in size and the next two were 5.5 GB. So this is what has been eating up my space. First thing I did, was run 'logrotate':
#/etc/cron.daily/logrotate 
It ran for a while as it rotated the logs. logrotate is meant to automate the task of administering log files on systems that generate a heavy amount of logs. It is responsible for compressing, rotating, and delivering log files. Read more about it here.

What I hoped by running logrotate was that it would rotate and compress the old log files so I can quickly remove those from my system. Why didn't I just delete that '/var/log' directory directly? Because that would break things. '/var/log' is needed by the system and the system expects to see it. Deleting it is a bad idea. So, I needed to ensure that I don't delete anything of significance.

After a while, logrotate completed execution and I was able to see some '.gz' compressed files in this directory. I quickly removed (or deleted) these.

Still, there were two files of around 5 GB: messages.1 and kern.log.1. Since these had already been rotated, I figured it would be safe to remove these as well. But instead of doing an 'rm' to remove them, I decided to just empty (truncate) them, in case they were still in use somewhere.
#> messages.1
#> kern.log.1

The size of both of these was reduced to '0' bytes. Great! Freed up a lot of disk space this way and nothing 'broken' in the process.

How did the log files become so large over such a small time period?


This is killing me. Normally, log files should not reach sizes like this if logrotate is doing its job properly and everything is running right. I am still interested in knowing how the log files got so huge in the first place. It is probably some service, application, or process generating a lot of errors. Maybe logrotate is not able to execute under 'cron' jobs? I don't know. Before emptying these log files I did take a look inside them to find repetitive patterns, but I quickly gave up on reading 5 GB files as I was short on time.

Since this is my personal laptop that I shut down at night, as opposed to a server that is up all the time, I have installed 'anacron' and will set 'logrotate' to run under 'anacron' instead of cron. I did this since I have my suspicions that cron is not executing logrotate daily. We will see what the results are.

I will update this post when I have discovered the root cause of this problem.

Thursday, February 5, 2015

Multiple Screens in (Kali) Linux | How To

Written by Pranshu Bajpai |  | LinkedIn

I have felt the need for multiple screens several times, simply because of the many tabs and terminal windows I keep open on my box. Hence, to avoid constantly switching between these, I decided to bring in multiple screens. You might have felt the same -- especially if you work on multiple applications simultaneously. Some people use multiple screens while playing games as well.

Before I brought in new screens, I wanted to get a 'feel' of using them, and decide whether this is something I would be comfortable with while working. Fortunately, I had an old LG 17'' CRT monitor lying around which I used for testing this set up of multiple screens. Here, the operating system I am using is Kali Linux (Debian 7 wheezy) but the process is fairly straightforward and would work for any Linux (or Windows) box.

How to set up multiple screens on (Kali) Linux

Firstly, you need to make the hardware connection, that is, connect the other screen's display cable to your machine. In my case, I connected the old CRT monitor's VGA cable to my HP laptop.

You need to locate the 'Display' panel to set up the initial configuration. This should not be hard to do. On a Debian or Kali Linux box, this would be under 'Applications' --> 'System Tools' --> 'Preferences' --> 'System Settings' --> 'Displays'



The location of 'Displays' could vary according to your Linux distro, however, again, it should not be hard to locate. Once inside, you will see that your OS has detected the two displays. Uncheck 'Mirror displays'. By default, your laptop's screen is the primary display and would be on the left. You can drag and change this so that the laptop's display is on the right--as I have done here.


How to set the primary display screen

By default, your laptop's screen is your primary display. This means that the top panel, containing 'Applications' and 'Places', and the bottom panel, tracking open windows and tabs, would be available on the laptop's screen only. I wanted to change this so that my CRT monitor's screen was the primary screen. To do so, I edited the monitors.xml file in Linux.

Locate 'monitors.xml' in '/home/<username>/.config/monitors.xml' or '/root/.config/monitors.xml'. Now, edit it in a text editor and modify the line containing '<primary>yes/no</primary>'.


In my case, I have modified the xml file so that the part corresponding to my laptop's screen says  '<primary>no</primary>', and the part corresponding to the CRT monitor says '<primary>yes</primary>'.
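For reference, the relevant portion of monitors.xml looks roughly like this (a trimmed sketch; the real file carries more fields per output, and the output names and values will differ on your machine):

```xml
<monitors version="1">
  <configuration>
    <output name="LVDS1">
      <!-- laptop panel; resolution and position fields omitted -->
      <primary>no</primary>
    </output>
    <output name="VGA1">
      <!-- external CRT; resolution and position fields omitted -->
      <primary>yes</primary>
    </output>
  </configuration>
</monitors>
```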

Now, the CRT monitor is the primary screen and the 'Applications', 'Places' etc would show up here. After all the set up, this is what it looks like on my box:


Note that this is the extended display corresponding to both the screens, that is, half of this shows up on one screen and half on the other. This is a picture of my set up:


Note: The Guake terminal (yellow font) has been configured to show up on both the screens. For this, I edited the '/usr/bin/guake' and changed the width from '100' to '200'.

So far, I am pleased with this multiple screen set up as it offers me a lot more work space, but it will take a little getting used to.

Friday, January 30, 2015

USD to INR Exchange Rate Calculator (Xoom, PayPal) Script in Python

Written by Pranshu Bajpai |  | LinkedIn

I frequently transfer money to India. For this reason, I find myself calculating amounts pertaining to USD to INR conversions using major remit websites such as the following:
  • Xoom
  • PayPal
  • Remitly
  • RIA Money Transfer
  • MoneyDart - Money2Anywhere
  • Trans-Fast
  • USForex Money Transfer
  • IndusInd Bank - Indus Fast Remit
  • State Bank of India
  • ICICI Money2India
  • Axis Bank - AxisRemit
  • Western Union

Since almost all of these websites offer more or less similar services, I usually choose the one offering the best exchange rate. Having said that, I was quickly annoyed by having to visit each of these websites to compare them and determine which one offers the best USD to INR exchange rate on a given day. For this reason, I decided to write a Python web scraping script that goes online, locates the exchange rates for all of these major websites, and shows them to me for comparison.

Usage: python usdtoinr.py

Download: Github: https://github.com/lifeofpentester/usdtoinr




By default, it shows you the exchange rate grabbed from xoom (without any switch). Or, you can use '-x' or '--xoom' switch to do the same thing.

By using the '-a' or '--all' switch, you can view the current exchange rates corresponding to all major remit websites.



Initially, my 'usdtoinr' python script only displayed the current exchange rate from the major websites, but then I realized that it would be better if they could convert an amount in USD (US dollars) to an amount in INR (Indian Rupees). Accordingly, I added this functionality in the script. You can use the '-c' or '--convert' switch for this purpose.
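The arithmetic behind the '-c' and '-p' switches is simple enough to state directly. Here is a minimal sketch (the 4 percent fee matches the script below; the rate in the usage note is made up):

```python
def usd_to_inr(usd, rate):
    """Convert a USD amount to INR at the given exchange rate."""
    return usd * rate

def after_paypal_fee(usd, rate, fee=0.04):
    """Convert to INR, then deduct PayPal's percentage fee."""
    return (1.0 - fee) * usd_to_inr(usd, rate)
```

For example, at a hypothetical rate of 61.5, usd_to_inr(100, 61.5) gives 6150.0 rupees, and after_paypal_fee(100, 61.5) gives roughly 5904 rupees.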



As I started using PayPal, I added another function in my script that would show me an amount in USD (US dollars) converted to an amount in INR (Indian Rupees) and then deduct PayPal's fees of 4 percent from this amount to show me the money that someone would actually receive in India. To invoke this function, use the '-p' or '--paypal' switch.


Of course, '-h' or '--help' is to see the usage information.

That's all the functionality coded in the script at the moment, since that's all I needed. With time, I may add more functions should I need them. As with all web scraping scripts, the functionality depends on the websites from which it captures the exchange rate information. If those websites change over time, the script might break. If that happens, or if you want some other functionality added, feel free to modify the code.

Note: argparse is a nice library that you can use to allow for command line arguments or switches in your scripts.

Note: The ASCII text banner in the script has been generated with a utility called 'figlet'.

Source Code:


If you would like to read the code, here it is:


#!/usr/bin/python

import requests
from bs4 import BeautifulSoup
import argparse
import re
import time

def _xoom():
 r = requests.get('https://www.xoom.com/india/send-money')
 data = r.text

 soup = BeautifulSoup(data)

 for rate in soup.find_all('em'):
  return rate.text

def _all():
 r = requests.get('http://www.compareremit.com')
 print "[+] Requested information!"
 data = r.text
 print "[+] Grabbed all exchange rates online!"
 soup = BeautifulSoup(data)
 for rate in soup.find_all('div',{"class":"c_logo_box"}):
      print rate.a.img['alt'] 
      print rate.span.text

def _rate_calc():
 ratetext = _xoom()
 print "[+] Requested exchange rate from Xoom!"
 found = re.search("(?<=\=)(.*?)(?=I)", ratetext)
 print "[+] Located today's exchange rate!" 
 rate = float(found.group())
 print "[+] Converting USD to INR now..."
 amount = args.convert * rate
 return amount

def _paypal():
 ratetext = _xoom()
 print "[+] Requested exchange rate from Xoom!"
 found = re.search("(?<=\=)(.*?)(?=I)", ratetext)
 print "[+] Located today's exchange rate!" 
 rate = float(found.group())
 print "[+] Converting USD to INR now..."
 print "[+] Calculating amount left after PayPal's 4 percent fee..."
 amount = 0.96*(args.paypal*rate)
 return amount
 

parser = argparse.ArgumentParser(description="Script for USD to INR calculations")
parser.add_argument('-x', '--xoom', help='exchange rate from xoom.com', action='store_true')
parser.add_argument('-a', '--all', help='exchange rate from all major remit websites', action='store_true')
parser.add_argument('-c', '--convert', help='USD to INR conversion using current exchange rate', type=float)
parser.add_argument('-p', '--paypal', help='amount after deducting PayPal\'s 4 percent fees', type=float)
args = parser.parse_args()




print """               _ _        _           
 _   _ ___  __| | |_ ___ (_)_ __  _ __ 
| | | / __|/ _` | __/ _ \| | '_ \| '__|
| |_| \__ \ (_| | || (_) | | | | | |   
 \__,_|___/\__,_|\__\___/|_|_| |_|_|   

                          --by Pranshu
"""

_time = time.asctime(time.localtime(time.time()))

print "[i] " + _time


if args.xoom:
 rate = _xoom()
 print "[i] Exchange Rate: " + rate
elif args.all:
 _all()
elif args.convert:
 amount = _rate_calc()
 print "\n[i]Amount in Rupees according to the exchange rate today: %f" %amount
elif args.paypal:
 amount = _paypal()
 print "\n[i]Amount in Rupees after deduction of Paypal's fees: %f" %amount
else:
 rate = _xoom()
 print "[i] Exchange Rate: " + rate
 #parser.print_help()

Thursday, January 8, 2015

PhD Comics Downloader | Python Script to Download Piled Higher and Deeper Comics

Written by Pranshu Bajpai |  | LinkedIn

PhD Comics, by Jorge Cham, provide a funny (but accurate) glimpse into the life of a graduate student. Being a graduate student myself, I have always enjoyed reading this comic.

At some point, I decided to read a bunch of these comics during travel, when I wouldn't be able to access the Internet. Since this is an online comic, that was a problem. So I wrote a small Python scraping script that visits the different pages on the phdcomics website, locates the comic GIFs, and downloads them to local disk for reading later on.

Usage: python downloadphdcomics.py

Download: Github: https://github.com/lifeofpentester/phdcomics


Note that the start and end comic numbers are hard-coded in the script as '1' and '1699' respectively. You can modify the script in any text editor to download a different range of comics.

You can store all of these GIFs in a ZIP file and change the extension from ZIP to CBR. Then, you can use any CBR reader to read these comics.
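That packaging step can be scripted too. Here is a sketch (my own illustration; strictly speaking CBR denotes a RAR archive and CBZ a ZIP archive, but most comic readers open ZIP-based archives regardless of the extension):

```python
import os
import zipfile

def bundle_comics(gif_dir, archive_path):
    """Pack all .gif files in gif_dir into a ZIP-based comic archive."""
    with zipfile.ZipFile(archive_path, "w") as archive:
        for name in sorted(os.listdir(gif_dir)):
            if name.endswith(".gif"):
                archive.write(os.path.join(gif_dir, name), arcname=name)
```

For example, bundle_comics('/root/', '/root/phdcomics.cbz') collects the downloaded GIFs into a single archive ready for a comic reader.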


In case you are interested in reading the code, here it is:


#!/usr/bin/python

"""The PhD Comics Downloader"""
"""
This code fetches PhD comics from www.phdcomics.com
and saves to '/root/'

Written by: Pranshu
bajpai [dot] pranshu [at] gmail [dot] com

""" 


from bs4 import BeautifulSoup
from urllib import urlretrieve
import urllib2
import re

for i in range(1, 1700):    # comics 1 through 1699

    url = "http://www.phdcomics.com/comics/archive.php?comicid=%d" %i 
    html = urllib2.urlopen(url)
    content = html.read()
    soup = BeautifulSoup(content)

    # The comic strip on each archive page is a GIF served from /comics/archive/
    for image in soup.find_all('img', src=re.compile('http://www.phdcomics.com/comics/archive/' + 'phd.*gif$')):
        print "[+] Fetched Comic " + "%d" %i + ": " + image["src"]
        outfile = "/root/" + "%d" %i + ".gif"
        urlretrieve(image["src"], outfile)


Sunday, December 21, 2014

How to Use Truecrypt | Truecrypt Tutorial [Screenshots] | Kali Linux, BackTrack, BackBox, Windows

Written by Pranshu Bajpai |  | LinkedIn

Data protection is crucial. The importance of privacy -- especially concerning sensitive documents -- cannot be overstated, and if you're here, you have already taken the first step towards securing it.

Truecrypt is one of the best encryption tools out there. It’s free and available for Windows and Linux. It comes pre-installed in Kali Linux and Backtrack. I first came across the tool when I was reading ‘Kingpin’ (The infamous hacker Max Butler was using it to encrypt data that could be used as evidence against him).

Here is how you can set up Truecrypt for use in Kali Linux (similar procedures will work in other Linux distros and Windows).

Go to Applications -> Accessories -> Truecrypt

Truecrypt's main window opens up. As this is the first time we are using Truecrypt, we need to set up a volume for our use.

Click ‘Create Volume’ and the Truecrypt volume creation wizard opens up:


Click on ‘create an encrypted file container’

This container will hold your encrypted files. The files can be of any type; as long as they lie in this container, they will be encrypted once the volume is 'dismounted'.

The next screen asks whether you want to create a Standard or Hidden volume. In the case of a hidden volume, no one would really know that it is there, so they can't 'force' you to provide its password.

For now we will just create a ‘Standard’ volume.



On the next screen you will be asked for the 'location' of this volume. This can be any drive on your computer. This is where your container will lie. The container can be seen at this location, but it won't have any 'extension' and will have the name that you provide during this setup.

Choose any ‘location’ on your computer for the container and carry on to the next step.

A password is now required for this volume. This is the password that will be used to decrypt the volume while 'mounting' it. Needless to say, it should be strong, as a weak password defeats the whole purpose of encryption.


Next, click on 'Format' and the volume creation will begin. You will be shown a progress bar, and it will take some time depending on how big your volume is.



Once your ‘Formatting’ is completed. Your volume is ready to be used. You can place files in there (drag and drop works). Once done ‘Dismount’ this volume and exit Truecrypt.

When you want to access the encrypted files in the container, fire up Truecrypt and click on any ‘Slots’ on the main window.

Now goto ‘Mount’ and point to the location of the container which you selected during setting up the volume.

It will then prompt you for the password.


If you provide the correct password, the volume is mounted on the slot you selected. Double-click that slot and a file browser window opens where you can see your decrypted files and work with them; you can also add more files to the container.

After you’re done, ‘Dismount’ the volume and exit TrueCrypt.
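The same create/mount/dismount workflow can also be driven from the command line. Below is a sketch that builds TrueCrypt 7.1a text-mode invocations; the flag names are assumptions based on the 7.1a CLI and should be verified against `truecrypt --text --help` on your system before use:

```python
# Build TrueCrypt text-mode command lines mirroring the GUI workflow above.
# Flag names are assumptions from the 7.1a CLI; verify them locally.

def create_cmd(container: str, size_bytes: int, password: str) -> list[str]:
    """Create a standard encrypted file container (AES, FAT filesystem)."""
    return [
        "truecrypt", "--text", "--create", container,
        f"--size={size_bytes}",
        f"--password={password}",
        "--volume-type=normal",
        "--encryption=AES",
        "--hash=SHA-512",
        "--filesystem=FAT",
        "--keyfiles=",                   # no keyfiles
        "--random-source=/dev/urandom",  # skip the interactive entropy prompt
        "--non-interactive",
    ]

def mount_cmd(container: str, mountpoint: str, password: str) -> list[str]:
    """Mount the container at a directory (the CLI analogue of a 'slot')."""
    return [
        "truecrypt", "--text", container, mountpoint,
        f"--password={password}", "--keyfiles=",
        "--protect-hidden=no", "--non-interactive",
    ]

def dismount_cmd(container: str) -> list[str]:
    """Dismount the volume when you are done."""
    return ["truecrypt", "--text", "--dismount", container]
```

Each list can be passed to `subprocess.run()`; building argument lists rather than shell strings keeps the password out of shell quoting trouble, though it will still appear in the process list.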

Sunday, November 30, 2014

FOCA Metadata Analysis Tool

Written by Pranshu Bajpai |  | LinkedIn

FOCA is an easy-to-use GUI tool for Windows that automates searching a website, grabbing documents, and extracting information from them. FOCA also helps structure and store the metadata revealed. Here we explore the importance of FOCA for penetration testers.


Figure 1: Foca ‘New Project’ Window


Penetration testers are well versed in utilizing every bit of information when constructing sophisticated attacks in later phases. This information is collected in the ‘reconnaissance’ or ‘information gathering’ phase of the penetration test, and a variety of tools assist in it. One such tool is FOCA.
Documents created by internal users for a variety of purposes are commonly found on websites. Releasing such public documents is common practice, and no one thinks twice before doing so. However, these documents carry metadata such as the ‘creator’ of the document, the ‘date’ it was written, and the ‘software’ used to create it. To a black-hat hacker looking to compromise systems, this may provide crucial information about internal users and the software deployed within the organization.

What is this ‘Metadata’ and Why would we be interested in it?
A one-line definition of metadata is “a set of data that describes and gives information about other data”. So when a document is created, its metadata includes the name of the user who created it, the time it was created, the time it was last modified, the folder path, and so on. As penetration testers we are interested in metadata because we like to collect all possible information before proceeding with the attack. Abraham Lincoln reportedly said, “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.” Metadata analysis is part of the penetration tester’s ‘sharpening the axe’: it can reveal internal users, their emails, their software, and much more.
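To see what this metadata looks like in practice: an Office Open XML document (.docx) is just a ZIP archive, and its core properties live in `docProps/core.xml`. Here is a minimal sketch using only the Python standard library (FOCA itself goes much further, across many formats):

```python
import zipfile
import xml.etree.ElementTree as ET

# XML namespaces used by docProps/core.xml inside a .docx archive
NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
    "dcterms": "http://purl.org/dc/terms/",
}

def docx_metadata(path: str) -> dict:
    """Return core properties (creator, timestamps, ...) from a .docx file."""
    with zipfile.ZipFile(path) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    meta = {}
    for tag in ("dc:creator", "cp:lastModifiedBy",
                "dcterms:created", "dcterms:modified"):
        el = root.find(tag, NS)
        if el is not None and el.text:
            meta[tag] = el.text
    return meta
```

Running this over a folder of downloaded documents yields the same creator-and-date trail that FOCA tabulates under its ‘Users’ and ‘Documents’ tabs.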

Gathering Metadata
As shown in Figure 1, FOCA organizes work into projects, each relating to a particular domain, so if you frequently analyze metadata from several domains as a pen tester, it can all be stored in an orderly fashion. FOCA crawls ‘Google’, ‘Bing’, and ‘Exalead’ looking for publicly listed documents (Figure 2).


Figure 2: Foca searching for documents online as well as detecting insecure methods
You can discover the following types of documents:
DOC
DOCX
PPT
PPTX
XLS
XLSX
SWX
SXI
ODT
PPSX
PPS
SXC

Once the documents are listed, you have to explicitly click ‘Download All’ (Figure 3).


Figure 3: Downloading Documents to a Local Drive
Once you have the documents on your local drive, you can ‘Extract All Metadata’ (Figure 4).

Figure 4: Extracting All Metadata from the downloaded documents
This metadata is stored under appropriate tabs in FOCA. For example, the ‘Documents’ tab holds the list of all documents collected, further classified into ‘Doc’, ‘Docx’, ‘Pdf’, etc. After extracting metadata, you can see numbers next to ‘Users’, ‘Folders’, ‘Software’, ‘Emails’ and ‘Passwords’ (Figure 5). These numbers depend on how much metadata the documents revealed. If the documents were part of a database, you would find important information about the database, such as its name, the tables it contains, and the columns in those tables.


Figure 5: Foca showing the ‘numbers’ related to Metadata collected


Figure 6: Metadata reveals Software being used internally
Such information can be employed during attacks. For example, users can be profiled and the corresponding names tried as usernames on login panels. Another example is discovering the exact software version used internally and then exploiting a weakness in that version, either over the network or through social engineering (Figure 6).
At the same time, FOCA employs fuzzing techniques to look for insecure methods (Figure 2).
Clearly, information that should stay within the organization is leaving it without the administrators’ knowledge. This can prove to be a critical security flaw; it is just a matter of who understands the importance of this information and how to misuse it.
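The user-profiling idea above can be sketched in a few lines: take creator names harvested from metadata and expand them into username candidates. The patterns below are common corporate conventions, assumed here for illustration:

```python
def username_candidates(full_name: str) -> list[str]:
    """Expand a 'dc:creator'-style name into likely login names."""
    parts = full_name.lower().split()
    if len(parts) < 2:
        return parts  # single token (or empty): nothing to combine
    first, last = parts[0], parts[-1]
    return [
        first + last,        # johnsmith
        f"{first}.{last}",   # john.smith
        first[0] + last,     # jsmith
        f"{first}_{last}",   # john_smith
    ]
```

Feeding such a list into a login-panel test (only with authorization) is exactly the kind of later-phase attack that reconnaissance metadata enables.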
So Can Foca Tell Us Something About the Network?
Yes, and this is one of FOCA’s best features. Based on the metadata in the documents, FOCA attempts to map the network for you. This can be a huge bonus for pen testers; understanding the network is crucial, especially in black-box penetration tests.

Figure 7: Network Mapping using Foca
As seen in Figure 7, a lot of network information may be revealed by FOCA. A skilled attacker can leverage this information to cause a variety of security problems. For example, ‘DNS Snoop’ in FOCA can be used to determine which websites the internal users visit and when.
So is Foca Perfect for Metadata Analysis?
There are other metadata analyzers out there, like Metagoofil, CeWL and libextractor. However, FOCA stands out, mainly because of its very easy-to-use interface and the way it organizes information. Pen testers work every day with a variety of command-line tools, and while they enjoy the smoothness of working in a shell, their appreciation is not lost for a stable GUI tool that automates things for them. FOCA is exactly that.
However, FOCA has not been released for Linux and works under Windows only, which may be a drawback because many penetration testers prefer working on Linux. The creators of FOCA joked about this at DEF CON 18: “FOCA does not support Linux, whose symbol is a penguin. Foca (seal) eats penguins.”

Protection Against Such Inadvertent Information Compromise
Clearly, the public release of documents on websites is essential. The solution to the problem lies in making sure such documents do not cough up critical information about systems, software, and users. Documents should be analyzed internally before release over the web; FOCA can import and analyze local documents as well. It is wise to first extract and remove the metadata contained in documents locally before publishing them, for example with a tool called ‘OOMetaExtractor’. Also, a plugin called ‘IIS Metashield Protector’ can be installed on your server to clean each document of metadata before the server serves it.
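As a minimal local scrub, the same ZIP trick shown earlier works in reverse: rewrite the .docx with its core properties emptied. This sketch handles only `docProps/core.xml`; a real pre-release pipeline (or the tools named above) must also cover `app.xml`, custom properties, and embedded content:

```python
import zipfile

# An empty core-properties part to substitute for the original
EMPTY_CORE = (
    b'<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
    b'<cp:coreProperties xmlns:cp="http://schemas.openxmlformats.org'
    b'/package/2006/metadata/core-properties"/>'
)

def scrub_docx(src: str, dst: str) -> None:
    """Copy a .docx, replacing docProps/core.xml with an empty part."""
    with zipfile.ZipFile(src) as zin, \
         zipfile.ZipFile(dst, "w", zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            if item.filename == "docProps/core.xml":
                zout.writestr(item.filename, EMPTY_CORE)
            else:
                zout.writestr(item.filename, zin.read(item.filename))
```

Running a scrub like this (and then re-checking the output with FOCA or the extraction sketch above) closes the loop: verify that what leaves the organization is only what you meant to publish.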

Summary

Like many security tools, FOCA can be used for good or for ill; it depends on who extracts the information first, the administrator or the attacker. Ideally, an administrator would not only analyze documents locally before release, but also go a step further and implement a security policy within the organization to ensure such metadata content is minimized (or falsified). It is surprising how often the power of the information contained in metadata is belittled and ignored. One reason may be that administrators prefer to focus on more direct threats rather than on small bits of information in metadata. But remember: if hackers have the patience to go dumpster diving, they will surely perform metadata analysis, and an administrator’s ignorance is the hacker’s bliss.

On the Web


http://www.informatica64.com/ – FOCA Official Website