Thursday, December 8, 2016

Simple Shortcuts for Docker and K8s Ops

If you frequently work with the Docker/Kubernetes (or OpenShift) stack, you may find the following shortcut commands handy:

# aliases for kubectl commands

# for operations on kube-system namespace, e.g. kubesys get pods == kubectl get pods --namespace=kube-system
alias kubesys='kubectl --namespace=kube-system'

# resource getters
alias pods='kubectl get pods'
alias logs='kubectl logs'
alias get='kubectl get'
alias desc='kubectl describe'
alias svc='kubectl get svc'
alias rc='kubectl get rc'
alias rs='kubectl get rs'
alias dep='kubectl get deployment'
alias nodes='kubectl get nodes'

# edit/delete ops
alias del='kubectl delete'
alias deldep='kubectl delete deployment'
alias editdep='kubectl edit deployment'
alias edit='kubectl edit'

# open a shell to a running pod
alias kssh='kubectl exec -it'


# aliases for Docker control/management commands

# list Docker images
alias dimg='docker images'

# clean dangling images (https://github.com/docker/docker/issues/8926)
alias dclean='docker rmi -f `docker images -f "dangling=true" -q`'

# start a new container with a shell, and discard it after exit from the shell
alias drun='docker run --rm -it --entrypoint=sh'


# service control for an all-in-one K8s node (master + minion on same machine)

# (re)starting the full stack
alias kreboot='for i in etcd flanneld docker kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet; do sudo service $i restart; service $i status; done'

# stopping the full stack
alias kdown='for i in etcd flanneld docker kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet; do sudo service $i stop; done'

# checking status of all required services (any unavailability will be displayed in red)
alias kstat='for i in etcd flanneld docker kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet; do service $i status | grep -B2 "Active:" | grep -v "Loaded:" | grep -E "inactive|exited|$"; done'
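To make these shortcuts permanent, save them to a file sourced from your shell startup script (the file name below is just an example) and use the short forms wherever you would type the full command:

# load the aliases in every new shell
echo "source ~/.k8s-docker-aliases.sh" >> ~/.bashrc
source ~/.bashrc

# examples (my-pod is a placeholder)
kubesys get pods         # kubectl get pods --namespace=kube-system
desc pod my-pod          # kubectl describe pod my-pod
kssh my-pod -- /bin/sh   # open a shell inside my-pod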

Sunday, December 4, 2016

Too Many Friends Spoil TheFacebook: The (Less Painful) Way to Unfriend Multiple FB Friends

Your Facebook connections can easily grow to thousands, and often it is too late when you realize that you actually know only a few dozen of them personally! If you face such a scenario and are looking for an easy way to unfriend as many of them as possible, this post is for you.

For obvious reasons, FB does not offer a bulk unfriend feature (and will probably not offer one for years to come). Moreover, due to privacy concerns, many people are still wary of using the bulk unfriend apps and scripts (though they are quite plentiful these days).

This post gives you a pretty good overview of the options available for bulk unfriending on FB. One of the easiest ways to unfriend multiple friends with the least overhead (in terms of the number of clicks and page refreshes) is via the Friends page on the FB mobile site. However, as of now, even this involves two clicks per friend: the Friends button followed by the Unfriend list entry. With thousands of friends to go through (since you never know when a real friend will pop up inside the vast list of fakes and unknowns), this could easily wear you down.

However, by combining the above approach with the following script, you can cut the process down to a simple click of the Yes/No button for each friend; or, more conveniently, a press of the Enter (yes) or Esc (no) key. The script is nothing fancy; there's no explicit access to your FB account or anything, just plain JavaScript that manipulates the page DOM.

// run on the page https://x.facebook.com/friends/center/friends/?mff_nav=1
// each friend entry contributes two "_56bt" buttons; the loop visits every second one, starting from an odd index
btns = document.querySelectorAll("button._56bt");
// ask for a starting index (enter 0 to start from the top); useful for resuming an interrupted run
i = parseInt(prompt("resume from"));
for (i += (1 - i % 2); i < btns.length; i += 2) {
	// open the Friends flyout for this entry
	btns[i].click();
	// Enter/OK on an empty field = unfriend, Esc/Cancel = skip, any text + OK = stop the script
	go = prompt("Unfriend " + btns[i].parentElement.parentElement.parentElement.parentElement.parentElement.children[1].children[0].children[0].textContent + "?");
	if (go != null) {
		if (go.length > 0) {
			break;
		}
		// click the "Unfriend" entry in the flyout that was just opened
		document.querySelectorAll('a[data-sigil="touchable touchable mflyout-remove-on-click m-unfriend-request"]')[0].click();
	}
	// log the index so it can be supplied as the resume point next time
	console.log(i);
}

For use, navigate to the Friend list page (https://x.facebook.com/friends/center/friends/?mff_nav=1), open the browser's web console (F12), paste the script and press Enter.

At startup, the script asks for a starting index, which is useful if you want to stop the process and resume it later. The script prints a numeric index after each confirmation, and you can simply provide the last-printed index to resume from that point. You can stop the process at any time by typing some text into the input field of the confirmation dialog and pressing OK (in which case nothing is done to the friend currently being displayed). (Please note that the printed index is no longer valid if you reload the page after unfriending some friends, since the unfriended ones will already have been removed from the list.)

Before you run the script, make sure that you scroll down the page as far down as required (the page only loads a limited number of friend entries, and dynamically loads more friends only after you have scrolled to the bottom).

Disclaimer: The script has been tested only on Firefox browser so far. As always, use at your own risk!

Monday, November 21, 2016

Selenium-Chrome Matchmaking: Syncing ChromeDriver and Selenium Server Versions

If you use Selenium Server in your browser automation tasks (either as a standalone JAR executable or via a manager like webdriver-manager for Protractor or NightWatch), you may already have come across the error <browser> version must be (>|<)= x.xx. While upgrading/downgrading the installed browser may appear to be a straightforward solution, you can simply upgrade/downgrade the Selenium driver for the respective browser at a fraction of the overhead.

For example, for a Google Chrome 48.0.2564.79 installation against Protractor 4.0.10 (which installs ChromeDriver 2.25 by default, subsequently leading to the error Chrome version must be >= 53.0.2785.0):

  1. Find the version of the Selenium driver compatible with the installed Chrome version. You can find the compatible Chrome versions for different ChromeDriver releases at https://chromedriver.storage.googleapis.com/2.25/notes.txt (change "2.25" to a newer version if your Chrome version is newer than what is listed there).

    The installed driver version would be displayed along with the error message you encountered, so it would be easy to determine whether you need to go up or down in the driver version hierarchy. However, newer release notes usually include compatibility details of all previous versions as well, so checking the latest version changelog would almost always be sufficient.

  2. ChromeDriver 2.20 seems compatible, so let's install it:

    webdriver-manager update --chrome --versions.chrome=2.20
  3. Now, specify the appropriate ChromeDriver version when starting Selenium via webdriver-manager:

    webdriver-manager start --versions.chrome=2.20

    If webdriver-manager is unavailable, you can simply download the required ChromeDriver version and use some technique to pass the following parameter to the Selenium server startup command:

    -Dwebdriver.chrome.driver=/path/to/the/desired/chromedriver_version
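    For example, if you launch the standalone server JAR directly, the property goes on the java command line before -jar (the JAR and driver file names below are placeholders; use the ones you actually downloaded):

    java -Dwebdriver.chrome.driver=/opt/drivers/chromedriver-2.20 -jar selenium-server-standalone-2.53.1.jar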

Friday, September 30, 2016

Taming VirtualBox: A Few Handy Tips

VirtualBox (referred to as VB hereafter) has become a popular tool for managing virtual machines. While most VM-related tasks in VirtualBox are trivial, there are certain facts, tips and tricks that can make your life easier.

  • Change your VirtualBox home directory.
  • By default, VirtualBox sets up a VirtualBox VMs directory in your user home. While this can be handy for general usage (for reasons like not requiring a volume mount for accessing your VMs), it is preferable to change the location to a different disk partition (especially if your VMs are expected to persist large amounts of data to disk, such as application traces or Docker image builds) to avoid surprises like "disk full" errors at unexpected (and often critical, as per Murphy's Law) times. In addition, changing the directory name to avoid whitespace can make your life easier, especially on Linux-based systems, which generally do not like whitespace in file names or paths.
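    The default machine folder can be changed via File > Preferences > General > Default Machine Folder, or from the command line (the target path below is just an example):

    VBoxManage setproperty machinefolder /data/vbox-vms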

  • VB eating up your disk space? Shrink your disk images.
  • Many prefer VB's dynamically allocated disk format over the fixed-size one, as it avoids allocating disk space for the full virtual capacity up front while still retaining the potential to grow to that size eventually. However, it has the downside of not shrinking back when content inside is deleted, so disk shrinking has to be done manually. Although the process may take considerable time and is not entirely friendly to your physical hard disk, it often yields surprising results.
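    A typical shrink cycle, sketched below for an ext-formatted Linux guest with a dynamically allocated VDI (names and paths are examples), zero-fills the free space from inside the guest and then compacts the image from the host:

    # inside the guest: fill free space with zeros (dd stops when the disk is full), then remove the filler
    sudo dd if=/dev/zero of=/zerofill bs=1M; sudo rm /zerofill
    # shut the guest down, then compact the image on the host (older VirtualBox versions use "modifyhd")
    VBoxManage modifymedium disk /path/to/ubuntu.vdi --compact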

  • Clean up your VMs regularly.
  • Often the main cause of huge disk images is the accumulation of garbage files, such as logs, traces (e.g. for Oracle DBs), downloaded packages (e.g. /var/cache/apt/archives in systems using the apt package manager, /var/lib/docker/tmp containing failed download artifacts in Docker installations) and residue from dev builds (e.g. untagged Docker images). Remember, a dynamically allocated VDI file won't grow unless the already allocated space is fully utilized; regular clean-ups therefore reduce the chance of ending up with a huge VDI, as VirtualBox can fit future content into the already allocated but freed-up disk (file) space.

  • Use sparse files for disk image backups.
  • If you back up your workspace with a command like rsync, each 'dirty' virtual machine image file gets copied all over again during every backup cycle, significantly increasing backup time and disk usage while possibly shortening the life of both backup and backed-up media. Moreover, since virtual disks are usually dynamically allocated and hence sparse (consuming just enough space to accommodate the current content rather than the entire virtual capacity), backing up the entire disk image file often means an unnecessary transfer of huge zero-valued byte blocks (given that your VM has not grown to its maximum capacity yet, or that you have performed a cleanup but have not yet shrunk your VM to propagate the effect to the physical disk).

    Posts like this and this describe a smarter approach where rsync just copies the changed data (blocks). It basically creates a set of sparse files initially for any files that have not been copied so far, and later runs an in-place update of the files already existing on the backup medium (including the ones created in the aforementioned step) where only the modified blocks get transferred. (Disclaimer: I have not yet validated the speedup gained by this strategy.)
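    The gist of that approach, as I understand it (paths below are placeholders), is a two-pass rsync: the first pass copies files missing on the backup while keeping them sparse, and the second updates already-present files in place so only changed blocks are written:

    # pass 1: new files only, preserved as sparse files on the backup medium
    rsync -a --ignore-existing --sparse /media/janaka/Stuff/VirtualBox/ /backup/VirtualBox/
    # pass 2: existing files only, updated in place; --no-whole-file forces block-level deltas even for local copies
    rsync -a --existing --inplace --no-whole-file /media/janaka/Stuff/VirtualBox/ /backup/VirtualBox/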

  • Use --type=headless at startup.
  • Often you do not want the started-up VMs to have a visible video console, especially if they merely run some service accessible over the network, or if they offer remote login (e.g. ssh). In such cases you can avoid cluttering your workspace (desktop) by starting the VM in headless mode (VBoxManage startvm --type=headless "VM name" in the console, or via the drop-down next to the Start button > Headless start in the UI), where the VM runs in the background without showing any display output window. If you are not sure whether you will need to access the display output later on, you can always go for the detachable start mode (similar to the aforementioned steps) where the VM initially starts in the foreground but can later be sent to the background (by attempting to close the display output and selecting Continue running in the background on the resulting dialog). (The same can be done to hide a headless VM after clicking the Show button while the VM is running.)

  • Disable swapping on your VMs.
  • Swapping usually happens when your VM runs out of memory. With the additional layers of abstraction and overhead of the virtual disk, swapping simply becomes a nightmare over time (probably because, although the VM accesses the swap area in a memory-mapped fashion, the underlying VDI file is part of the regular filesystem and cannot compensate for the required access patterns). On Linux-based systems, swapping can easily be disabled at installation time, or by removing swap mount entries from /etc/fstab on an already installed system (as sketched below). Since a VM is usually used for a specific purpose (e.g. running a database server), allocating enough physical memory to be used instead of swap should not be a problem.
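    On an already installed Ubuntu guest, for example, this boils down to turning swap off and commenting out the swap entry in /etc/fstab (a rough sketch; check your own fstab entry first):

    sudo swapoff -a                              # deactivate all active swap areas
    sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab    # comment out swap mount entries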

For more advanced VB tricks like creating immutable and 'resettable' VMs, visit IntegrationNotes.org.

Saturday, August 27, 2016

Feeding Your Fish with Fake Food: Imitating a Wireless Network

Creating a dummy network may be useful at times when you want to replicate a local network at a different location. For example, the network configuration of one of my development set-ups is dependent on my workplace Wi-Fi network, and I wanted to use the same configuration at my residence as well (for working during out-of-office hours).

This is how I solved the issue on Ubuntu 16.04:

  1. created a new Wi-Fi network (say Dummy)
  2. configured settings of the new network to be identical to that of the workplace network config
  3. (Now, although the config is present in /etc/NetworkManager/system-connections/Dummy, I cannot connect to it since there's no physical network.)
  4. converted the network to a Wi-Fi hotspot by editing /etc/NetworkManager/system-connections/Dummy and replacing mode=infrastructure with mode=adhoc (see the snippet after this list)
  5. (Initially creating a Wi-Fi hotspot and later changing its configuration (as suggested in some forums) did not work in my case, as the network kept on getting reverted back to infrastructure mode during the activation of the hotspot.)
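The snippet referred to in step 4: after the edit, the relevant part of /etc/NetworkManager/system-connections/Dummy should look roughly as follows (only the mode line is actually changed; the SSID and the rest of the file depend on your own settings):

[wifi]
ssid=Dummy
mode=adhoc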

Now, when I turn on my Wi-Fi adapter and connect to the Dummy network (usually via the "Connect to hidden Wi-Fi network" option on the NetworkManager applet menu, as the new network is not visible on the available networks list most of the time), my set-up works just as if it was connected to my workplace network.

Reincarnation of a Silicon Soul: Migrating a Physical System to a VM

If you have watched the movie Chappie, you would certainly remember how the self-conscious robot Chappie transferred his creator's consciousness—and ultimately his own—to another robot carcass. While such inception and transference of human consciousness may still seem far-fetched, transferring the 'consciousness' (OS, applications and data) of a physical computer is quite possible, at least into a target virtual machine.

Until recently I was using a fairly old HP430 laptop (Core i5, 4GB RAM, 500GB HDD, running Ubuntu 14.04), of which I was making full backups (i.e. of my work and, strange as it may sound, the system partition) using the handy rsync tool:

sudo rsync -a --exclude=lost+found --exclude=media --exclude=mnt --exclude=proc --exclude=run --exclude=sys --exclude=tmp / /media/janaka/SYSTEM/
sudo rsync -a --exclude=sysbackup --exclude=AndroHax --exclude=Desktop --exclude=Documents --exclude=Downloads --exclude=Music --exclude=Pictures --exclude=Public --exclude=Templates --exclude=Videos --exclude="VirtualBox VMs" --exclude=workspace --exclude=Documents --exclude=.android --exclude=.cache --exclude=.eclipse --exclude=.grails --exclude=.IntelliJIdea13 --exclude=.local/share/zeitgeist --exclude=.m2 --exclude=.mozbuild --exclude=.npm --exclude=.Private_encfs /home/janaka/ /media/janaka/SYSTEM/home/janaka/

When I recently switched to a new machine, I gave the old one away to a friend. However, I didn't really want to part with it entirely, so I thought of trying to restore my old machine into a VM. Since I already had an (almost) complete snapshot of the machine, it did not seem impossible.

Well, it wasn't that easy (partly due to a few blunders I made) and I still cannot log in to my account via the GUI (though the guest login works fine), but the VM runs fine and I can log in and work on the tty. Anyway, the steps I followed may (hopefully) be useful for someone else stepping into the same waters.

Not intending to take the trouble of setting up boot configurations from scratch, I started off with an existing VDI with Ubuntu 14.04 minimal installed, and stripped off all content existing on the disk by mounting it via an Ubuntu live CD session:

sudo rm -r bin boot dev etc home lib lib64 opt sbin srv sys usr var initrd.img vmlinuz

Next, with the qemu-nbd tool I mounted the disk on the host and started replacing all its content with the rsync backup I had made (already mounted at /media/janaka/SYSTEM):

sudo modprobe nbd max_part=16
sudo qemu-nbd -c /dev/nbd0 /media/janaka/Stuff/VirtualBox/Test/Test.vdi
mkdir /tmp/part1
sudo mount /dev/nbd0p1 /tmp/part1/
sudo rsync -a /media/janaka/SYSTEM/* /tmp/part1/

Unfortunately it turned out that the VDI I used wasn't large enough to accommodate the full backup content. I resized the VDI with VirtualBox's Virtual Media Manager, and tried to expand the partition already existing on the disk, after deleting the unwanted (swap and home) partitions from the original VDI:

sudo parted
select /dev/nbd0
rm 2 5
resizepart 1 18000
quit

Although both parted and its GUI pal (gparted) seemed to resize the partition (and had successfully eliminated the deleted partitions), the size change was not persistent: the disk size kept falling back to the previous value as soon as the QEMU mount was removed. After some desperate struggling, I luckily came across a set of articles (particularly the last part of this answer) dealing with a few similar issues, and a single command finally did the trick:

sudo resize2fs /dev/nbd0p1

After the resize I re-ran rsync, and things went fine.

sudo mount /dev/nbd0p1 /tmp/part1/
sudo rsync -a /media/janaka/SYSTEM/* /tmp/part1/
sudo qemu-nbd -d /dev/nbd0

However, when trying to boot the resulting VM, I ran into a GRUB issue:

error: no such device: myoldphysicalharddisksuuidwasdisplayedhere

Rather than following the regular "mount all dirs and run update-grub" approach outlined in many forums (e.g. this AskUbuntu post), I went ahead and directly modified /usr/lib/grub/grub-mkconfig_lib on the VDI (the "remove the search lines for..." section mentioned in the question part of the above post).

sudo vi /usr/lib/grub/grub-mkconfig_lib

... find and comment out occurrences of this part
# if fs_uuid="`${grub_probe} --device ${device} --target=fs_uuid 2> /dev/null`" ; then
# echo "search --no-floppy --fs-uuid --set ${fs_uuid}"
# fi

... save and quit
:wq

And voilà, the VM started booting up successfully!

I managed to log into a guest session via the GUI (Unity), and to my account via a terminal session (logging into my own account gives a blank desktop, very similar to this issue). Most of my services were intact, including Apache, MySQL, VirtualBox and Docker, and using the latter I even managed to run a container in my "newly virtualized" system!

Monday, July 18, 2016

Taming the Linux Beast: Adding My Two Cents

This is not a "Linux 101 Guide"; rather, it is a collection of my ideas and experiences on how I (slightly) improved my working efficiency under Linux (particularly Ubuntu) during my short life as a Linux enthusiast.

  • Save frequently used commands.

Say you found a nice (or not-so-nice) command which, unfortunately, has a truckload of parameters; rather than looking up the documentation (man, --usage or --help) every time you need to figure out the right combination, write a small script to encapsulate the whole command (e.g. dlsize.sh for wget --method=HEAD -S -O - url, which attempts to retrieve the size of a download without downloading it in the first place). Better still, define an alias (alias dlsize='wget --method=HEAD -S -O -') so you can simply say dlsize url in place of the lengthy expansion.
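For instance, a minimal dlsize.sh along those lines might be (the grep filter is an extra convenience I've added; drop it to see all response headers):

#!/bin/bash
# dlsize.sh - fetch only the HTTP headers of a URL and show the reported download size
wget --method=HEAD -S -O - "$1" 2>&1 | grep -i "content-length"

# usage: ./dlsize.sh http://example.com/big-download.iso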

  • man is your best friend.

Many users have (almost) forgotten that a whole pile of tool and program documentation comes bundled with any decent Linux distro, available at the expense of just one command--man. In addition to the advantage of not having to take your hands off the keyboard, man also allows you to search the manuals on-the-fly for any specific parameters or features you require, and often provides examples for common command use cases.

  • Avoid the urge to sudo.

Whenever their command fails, many users involuntarily retry it with sudo prepended. While this works most of the time, it can easily turn out to be disastrous from both safety (you'll get no Permission denied warnings if you accidentally sudo rm -r /bin) and security (running a malicious app or script--or even a legit one with an unfortunate corner-case bug--with sudo would allow it to wreak havoc on your system) standpoints.

A good example is docker. In its default installation, Docker does not add the current user to the docker group (membership in the group is a requirement for connecting to the Docker daemon). Many users don't know this, and when they run into the traditional error Cannot connect to the Docker daemon, they try the sudo approach and, when they find it working, stick to it for life. The proper approach, however, is to add yourself to the docker group (a simple sudo usermod--yes, here we need a (one-time) sudo) and perform a new login cycle to load the new group membership, after which you can easily use docker without the sudo hack (see the sketch below).
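A minimal sketch of that one-time fix (the docker group is normally created by the Docker installation; the groupadd line is needed only if it is missing):

sudo groupadd docker             # only if the group does not already exist
sudo usermod -aG docker $USER    # add yourself to the docker group
# log out and back in so the new group membership is picked up
docker ps                        # should now work without sudo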

  • Mind your file permissions and ownerships.

Whenever you find some file inaccessible due to lack of permissions, don't just blindly (sudo) chmod 777. Some files are not meant to be readable, modifiable or executable by some parties, which is why Linux has taken the trouble of defining three types of permissions (rwx) under three levels of isolation (ugo). Same goes for ownership (chown). In fact, proper maintenance of both these aspects is a key trick in minimizing application and system errors, not to mention unpleasant mistakes of deleting or modifying files.
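For example, instead of a blanket chmod 777, grant only what is actually needed (the file name, owner and group below are hypothetical):

sudo chown deploy:webadmin app.properties   # correct owner and group first
sudo chmod 640 app.properties               # owner: read/write, group: read-only, others: no access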

  • Ease your life with symlinks.

Symlinks can save lots of time if you have to make some 'dynamic' file available at a different location on the filesystem. A good example is a development build process, where you want the build artifact (e.g. WAR file) to be available at a different location for testing. A useful property of symlinks in this case is that, although symlinks are invalidated when the corresponding files are deleted, they automatically become valid again when the files get regenerated, so you don't have to worry about them once they are in place.
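A sketch of the build-artifact case (paths are hypothetical); since the link stores only the target path, it keeps working across rebuilds:

# expose the freshly built WAR inside the app server's deployment directory
ln -s /home/me/projects/shop/target/shop.war /opt/tomcat/webapps/shop.war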

  • Read the messages. And the errors. And the logs.

More than half the problems encountered in Linux can be solved simply by looking at the messages printed out upon the fault. For almost all of the rest, you just have to check the respective error logs. It is surprising how many people are accustomed to just running a command and googling "$command not working" when it fails. A little bit of attention can sometimes save you hours of frustration.

  • Continue to nurture your list of Linux shortcuts and tricks.

With the plethora of resources--inline help (-h, --help, --usage), manpages, logs, online articles, forums and tutorials--it is almost impossible to find an 'unresolvable' problem in Linux. While it is impossible to memorize all this stuff, it is often a good idea to persist the sources in some way (e.g. custom scripts, aliases, browser bookmarks, sticky notes). Meanwhile it is good (and fun) to try to find ways to fix whatever issues you come across during your day-to-day Linux encounters; it will make your life easier and help you admire the simplicity and elegance of Linux.

Thursday, May 5, 2016

Harmonizing UltraESB and FTP/S: Troubleshooting Connectivity

The UltraESB has built-in support for communicating with FTP, FTP/S, SFTP and local filesystem (file://) endpoints. While setting up the other file transports is fairly easy, FTP/S can be problematic, especially if you start from a fresh FTP/S installation.

Assuming a local FTP/S server setup (e.g. vsftpd) with a user ftp having password ftp and home directory /srv/ftp (quoting vsftpd default settings), you might encounter a few problems on the UESB side when trying to submit files to an endpoint URL such as ftps://ftp:ftp@localhost/srv/ftp/.

These usually occur at FTP level, so in order to find the actual causes you will have to enable TRACE logging on the org.adroitlogic.ultraesb.transport.file category in $ULTRA_HOME/conf/log4j.properties:

log4j.category.org.adroitlogic.ultraesb.transport.file=TRACE

With TRACE enabled as above, the UESB log will contain a trace of all FTP commands being executed during the transfer attempt, as:

2016-05-04 22:57:55,043 [-] [primary-1] [system] [000000I] DEBUG FTPSConnector Connecting over FTPS to URL : ftps://ftp:********@localhost/srv/ftp/ftps/send.msg?StrictHostKeyChecking=no

220 (vsFTPd 3.0.3)
AUTH TLS
234 Proceed with negotiation.
PROT P
200 PROT now Private.
USER ftp
...

If you see 530 Anonymous sessions may not use encryption. (usually followed by a stacktrace on the UESB):

...
234 Proceed with negotiation.
PROT P
200 PROT now Private.
USER ftp
530 Anonymous sessions may not use encryption.
QUIT
221 Goodbye.
2016-05-04 22:57:55,381 [-] [primary-1] [system] [000000W]  WARN FTPSConnector Login failed for FTPS server : ftps://ftp:********@localhost/srv/ftp/ftps/send.msg?StrictHostKeyChecking=no
PORT 127,0,0,1,179,254
2016-05-04 22:57:55,385 [-] [primary-1] [system] [000000W]  WARN FTPConnector Error copying file to URL : ftps://ftp:********@localhost/srv/ftp/ftps/send.msg?StrictHostKeyChecking=no
org.apache.commons.net.ftp.FTPConnectionClosedException: Connection closed without indication.
	at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:313)
	at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:290)
	at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:479)
	at org.apache.commons.net.ftp.FTPSClient.sendCommand(FTPSClient.java:535)
...

you would want to allow anonymous SSL sessions in your vsftpd.conf:

allow_anon_ssl=YES

(Note: Make sure you restart vsftpd after every such change!
service vsftpd restart # as root)

If the issue is 522 SSL connection failed: session reuse required:

...
234 Proceed with negotiation.
PROT P
200 PROT now Private.
USER ftp
331 Please specify the password.
PASS ftp
230 Login successful.
TYPE I
200 Switching to Binary mode.
PASV
227 Entering Passive Mode (127,0,0,1,161,222).
STOR /srv/ftp/ftps/send.msg
150 Ok to send data.
522 SSL connection failed: session reuse required
2016-05-04 22:59:07,418 [-] [primary-1] [system] [000000I] DEBUG FTPConnector FTP copy failed to : ftps://ftp:********@localhost/srv/ftp/ftps/send.msg?StrictHostKeyChecking=no
2016-05-04 22:59:07,429 [-] [primary-1] [system] [000000W]  WARN FileTransportSender Error sending message via file transport sender to : ftps://ftp:********@localhost/srv/ftp/ftps/send.msg?StrictHostKeyChecking=no

Adding/updating require_ssl_reuse=NO in vsftpd.conf would resolve it.

On the other hand, if authentication fails mid-way in the SSL handshake process:

220 (vsFTPd 3.0.3)
AUTH TLS
234 Proceed with negotiation.
2016-05-04 23:05:26,907 [-] [primary-2] [system] [000000W]  WARN FTPSConnector Error connecting over FTPS to URL : ftps://ftp:********@localhost/srv/ftp/ftps/send.msg?StrictHostKeyChecking=no
javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
	at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
	at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
...
	at org.adroitlogic.ultraesb.transport.file.FTPSConnector.(FTPSConnector.java:62)
	at org.adroitlogic.ultraesb.transport.file.FileSystemConnectorFactory.createConnector(FileSystemConnectorFactory.java:54)
...
	at org.adroitlogic.ultraesb.core.work.SimpleQueueWorkManager$1.run(SimpleQueueWorkManager.java:306)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
2016-05-04 23:05:26,909 [-] [primary-2] [system] [000000W]  WARN FTPConnector Error copying file to URL : ftps://ftp:********@localhost/srv/ftp/ftps/send.msg?StrictHostKeyChecking=no
java.net.SocketException: Socket closed
	at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:116)
	at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
...
	at org.apache.commons.net.ftp.FTP.__send(FTP.java:501)
	at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:475)
...
	at org.apache.commons.net.ftp.FTPClient.storeFile(FTPClient.java:1795)
	at org.adroitlogic.ultraesb.transport.file.FTPConnector.put(FTPConnector.java:169)
	at org.adroitlogic.ultraesb.transport.file.FileTransportSender.internalSend(FileTransportSender.java:80)
...
	at org.adroitlogic.ultraesb.core.work.SimpleQueueWorkManager$1.run(SimpleQueueWorkManager.java:306)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
2016-05-04 23:05:26,919 [-] [primary-2] [system] [000000W]  WARN FileTransportSender Error sending message via file transport sender to : ftps://ftp:********@localhost/srv/ftp/ftps/send.msg?StrictHostKeyChecking=no

you might require a tcpdump (against the loopback interface (lo) in this case) to find the cause, as neither the ESB logs (even with SSL debug logging enabled: -Djavax.net.debug=all) nor the vsftpd log shows the underlying cause properly:

# tcpdump -A -vvv -i lo
...
06:41:37.566325 IP (tos 0x0, ttl 64, id 14724, offset 0, flags [DF], proto TCP (6), length 83)
    localhost.ftp > localhost.60532: Flags [P.], cksum 0xfe47 (incorrect -> 0xc3fd), seq 21:52, ack 11, win 342, options [nop,nop,TS val 37556708 ecr 37556708], length 31: FTP, length: 31
	234 Proceed with negotiation.
E..S9.@.@..............t..(u..,....V.G.....
.=...=..234 Proceed with negotiation.

06:41:37.576134 IP (tos 0x0, ttl 64, id 15677, offset 0, flags [DF], proto TCP (6), length 294)
    localhost.60532 > localhost.ftp: Flags [P.], cksum 0xff1a (incorrect -> 0x9887), seq 11:253, ack 52, win 342, options [nop,nop,TS val 37556711 ecr 37556708], length 242: FTP, length: 242
E..&==@.@............t....,...(....V.......
.=...=.............W*...l....I...TG	R.1_........F>...d.$.(.=.&.*.k.j.
...5.....9.8.#.'.<.%.).g.@.	.../.....3.2.,.+.0.....2...../...-.1.........
.............\.
.4.2...............	.
....................................................................
06:41:37.576284 IP (tos 0x0, ttl 64, id 14725, offset 0, flags [DF], proto TCP (6), length 59)
    localhost.ftp > localhost.60532: Flags [P.], cksum 0xfe2f (incorrect -> 0xd702), seq 52:59, ack 253, win 350, options [nop,nop,TS val 37556711 ecr 37556711], length 7: FTP, length: 7
E..;9.@.@..6...........t..(...-....^./.....
.=...=........(
06:41:37.577983 IP (tos 0x0, ttl 64, id 14726, offset 0, flags [DF], proto TCP (6), length 62)
    localhost.ftp > localhost.60532: Flags [P.], cksum 0xfe32 (incorrect -> 0xd9ea), seq 59:69, ack 253, win 350, options [nop,nop,TS val 37556711 ecr 37556711], length 10: FTP, length: 10
	500 OOPS: [!ftp]
E..>9.@.@..2...........t..(...-....^.2.....
.=...=..500 OOPS: 
06:41:37.577995 IP (tos 0x0, ttl 64, id 14727, offset 0, flags [DF], proto TCP (6), length 118)
    localhost.ftp > localhost.60532: Flags [P.], cksum 0xfe6a (incorrect -> 0xe009), seq 69:135, ack 253, win 350, options [nop,nop,TS val 37556711 ecr 37556711], length 66: FTP, length: 66
	error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher[!ftp]
E..v9.@.@..............t..(...-....^.j.....
.=...=..error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher
06:41:37.578000 IP (tos 0x0, ttl 64, id 14728, offset 0, flags [DF], proto TCP (6), length 54)
    localhost.ftp > localhost.60532: Flags [P.], cksum 0xfe2a (incorrect -> 0x0bb0), seq 135:137, ack 253, win 350, options [nop,nop,TS val 37556711 ecr 37556711], length 2: FTP, length: 2
	
E..69.@.@..8...........t..(...-....^.*.....
.=...=..

06:41:37.578230 IP (tos 0x0, ttl 64, id 14729, offset 0, flags [DF], proto TCP (6), length 62)
    localhost.ftp > localhost.60532: Flags [P.], cksum 0xfe32 (incorrect -> 0xd99c), seq 137:147, ack 253, win 350, options [nop,nop,TS val 37556711 ecr 37556711], length 10: FTP, length: 10
	500 OOPS: [!ftp]
E..>9.@.@../...........t..(...-....^.2.....
.=...=..500 OOPS: 
06:41:37.578323 IP (tos 0x0, ttl 64, id 15678, offset 0, flags [DF], proto TCP (6), length 52)
    localhost.60532 > localhost.ftp: Flags [R.], cksum 0xfe28 (incorrect -> 0x18bc), seq 253, ack 147, win 342, options [nop,nop,TS val 37556711 ecr 37556711], length 0
E..4=>@.@............t....-...(....V.(.....
.=...=..

Now we have a hint on what might be going wrong:

error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher[!ftp]

This usually occurs if the cipher suite offered by the FTP/S server is not sufficiently strong. Adding/updating

ssl_ciphers=HIGH

in vsftpd.conf would get you through this issue.
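To summarize, the vsftpd.conf additions discussed above are:

allow_anon_ssl=YES       # let the (anonymous) ftp user use TLS
require_ssl_reuse=NO     # don't demand SSL session reuse on the data connection
ssl_ciphers=HIGH         # offer a sufficiently strong cipher suite

(Remember to restart vsftpd after editing the file.)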

Once you have everything in place, the UESB trace would indicate a successful transfer:

2016-05-04 23:02:08,930 [-] [primary-1] [system] [000000I] DEBUG FTPSConnector Connecting over FTPS to URL : ftps://ftp:********@localhost/srv/ftp/ftps/send.msg?StrictHostKeyChecking=no
220 (vsFTPd 3.0.3)
AUTH TLS
234 Proceed with negotiation.
PROT P
200 PROT now Private.
USER ftp
331 Please specify the password.
PASS ftp
230 Login successful.
TYPE I
200 Switching to Binary mode.
PASV
227 Entering Passive Mode (127,0,0,1,104,213).
STOR /srv/ftp/ftps/send.msg
150 Ok to send data.
226 Transfer complete.
2016-05-04 23:02:09,360 [-] [primary-1] [system] [000000I] DEBUG FTPConnector FTP copy completed successfully to : ftps://ftp:********@localhost/srv/ftp/ftps/send.msg?StrictHostKeyChecking=no

Monday, April 25, 2016

Validity Sensors 138a:0050 - Swipe it Like a Pro

Validity Sensors, Inc. Swipe Fingerprint Sensor (VendorID:ProductID = 138a:0050) is available on some new laptop models such as the HP Envy 15t. If you plan on installing Ubuntu Linux on such a machine, and wish to experience the luxury of just swiping your finger to log in and do sudo stuff, you would need to get the fingerprint sensor drivers installed properly.

Unfortunately, the default fingerprint driver library libfprint does not (yet) support our sensor. Even fingerprint-gui, the de facto choice for fingerprint sensor configuration on Ubuntu, fails to detect the device under its default configuration.

Fortunately, someone patched libfprint to include support for several other devices, including our own (138a:0050). You can get the patch from here (courtesy of this AskUbuntu question) to get started.

Before the build, ensure that you have fingerprint-gui installed, following this guide.

The codebase contains an autogen.sh script that would build it for you. However, before building, you will need the following packages as build prerequisites (as per Ubuntu 16.04; the versions may vary):

xorg-server-source
autoconf
automake
libtool
libusb-1.0-0-dev
libnss3-dev
libglib2.0-dev
libpixman-1-dev
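On Ubuntu 16.04 these can be pulled in with apt (package names exactly as listed above):

sudo apt-get install xorg-server-source autoconf automake libtool \
    libusb-1.0-0-dev libnss3-dev libglib2.0-dev libpixman-1-dev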

For building, you can either use ./autogen.sh or the following command sequence:

autoreconf --install --force
autoconf
./configure
make
sudo make install

If the last command installs the libraries at /usr/local/lib/ (the location is usually displayed towards the end of the build output), you would have to manually symlink /usr/lib/libfprint.so.0 (currently linked to the existing library in /usr/lib/) to /usr/local/lib/libfprint.so:

sudo ln -sf /usr/local/lib/libfprint.so /usr/lib/libfprint.so.0

Now fire up fingerprint-gui and verify that the fingerprint device is detected properly and you can configure and test it. You will also be able to set up fingerprint-based login via Ubuntu's user account settings (might require a system restart).

If you have trouble running fingerprint-gui as a regular user but no problems in superuser mode, you may need to make the corresponding USB device entry accessible for the relevant user/group, or make it universally readable and writable:

sudo chmod 666 /dev/bus/usb/BusID/DeviceID

Please post your tips/issues/concerns regarding the 138a:0050 fingerprint sensor here, for the benefit of those who already have it, as well as everyone destined to get hold of one in the near future!

Tuesday, March 22, 2016

Torrenting on Android: Squeezing Your Left-over Mobile Data

Many of you may be using mobile data subscriptions with daily quotas, resetting usually at midnight. While heavy users may drain the quota easily, light surfers (like myself) may see several megabytes of data being unclaimed each day.

Utilizing such excess data for torrents is a common trick used by movie fans and many others. Good torrent clients are currently available for almost every platform, including Android smartphones. Flud is one such great Android torrent client in widespread use. So if you have an Android phone with a data subscription, there's no need to keep your computer running overnight, as the phone is fully capable of catering to your torrent needs.

However, a natural question is when the torrent client should be launched and when it should be stopped. One's data usage pattern varies, so you cannot decide exactly how long it should be allowed to run. As the quota usually resets at midnight, manually launching the client at, say, 11 PM and closing it at midnight is not very practical either, unless you're nocturnal and have a really good memory.

My approach to solve the issue goes as follows:

Revising our requirements:

  1. start the torrent client at a late hour
  2. stop the client (or disable data) as soon as the quota is hit
  3. shut down the client right before midnight so as not to ruin the quota for the next day

The second requirement is fairly easy to satisfy, using a custom quota app like Data Lock (or the data limit option of the native data usage monitor, if you're fortunate enough to have a daily limit option there). With Data Lock, all you have to do is to add a data plan of the expected volume, set the reset time (e.g. daily at 00:00), and engage the options "Disable data on reaching plan limit" and "Enable data on plan renewal" so as to eliminate all manual work of the enable-disable cycle.

For the first requirement, you can use any of the various task scheduler apps. Tasker is the undisputed leader, so we'll go with it. Just create a new time-based profile for 11 PM, and add a "launch app" task for Flud under it.

All sounds fine, but how to set up the third requirement? I tried a few approaches, but to no avail:

  • the native "kill app" option on Tasker does not work for Flud
  • the TaskKill Tasker plugin does close Flud, but only under the "force stop" options (which is unacceptable as Flud is a torrent client and requires time to quit gracefully)
  • quitting by automating menu commands (Menu > Shutdown) via AutoInput does not work on older Android versions (e.g. on 4.2.2 which is what I have)

Having a look at the LogCat output, however, we see that an Intent is broadcast when you quit the Flud app via the menu:

V/ActivityManager( 561): Broadcast: Intent { act=com.delphicoder.flud.SHUTDOWN flg=0x10 } ordered=false userid=0 callerApp=ProcessRecord{43470a20 25617:com.delphicoder.flud/u0a10110}

Reproducing this broadcast does indeed close the app! What we need is a new Tasker profile for, say, 11.59 PM, triggering a task that sends a com.delphicoder.flud.SHUTDOWN intent to the broadcast receiver (other "target" options do not seem to produce the intended result).
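If you want to verify the broadcast before wiring it into Tasker, you can fire the same intent from a computer over adb while Flud is running (assuming USB debugging is enabled):

adb shell am broadcast -a com.delphicoder.flud.SHUTDOWN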

So, all that remains now is putting together the ingredients:

  • Flud with some torrents added, having been shut down while the torrents are active
  • Data Lock (or another data limit app), with a proper data profile
  • Tasker, with the two profiles in place for starting and stopping Flud at appropriate times

Set them up, sit back, and enjoy your torrents at no extra cost!

Did Android SDK Update Ruin Your ADT? Don't Panic!

Despite the new Android Studio hype, many are still sticking to the old Eclipse+ADT (Android Developer Tools) set-up. However, despite its maturity, ADT can still cause numerous headaches for an average Android developer. One notable case is the compatibility issue that arises when you update the Android SDK without first installing the corresponding ADT updates. In several scenarios described on StackOverflow and other programmer forums, some devs have been so frustrated by this issue that they ended up reinstalling everything from scratch.

Recently I too stumbled into the same issue when I updated my SDK to v24. My ADT installation was fairly old (v22.6.2) but had been serving me well until the said SDK update. After the update, Eclipse started complaining "The Android SDK requires Android Developer Toolkit version 23.0.0 or above", the common symptom of a streak of troubles. Adding the update URLs as suggested in most forums did not solve the issue, and I didn't want to lose the SDK update either.

Looking back at the error message:

The Android SDK requires Android Developer Toolkit version 23.0.0 or above

23.0.0... Hmm... Looks like the SDK explicitly demands the exact version 23.0.0. So the string 23.0.0 would probably be lying around somewhere among the updated SDK files.

Easiest way to find out?

cd /usr/lib/sdk
grep -rI "23\.0\.0"

Gotcha!

tools/lib/plugin.prop

File says

plugin.version=23.0.0
and that's the only occurrence.

Let's try changing it to

plugin.version=22.6.2
and restarting Eclipse.

Damn, the error is gone! So, is that it?

Not so fast... Something's still wrong, and I can't run any apps yet.

When I try to open the DDMS perspective, I see the culprit:

[2016-02-27 19:33:33 - DDMS] DDMS files not found: /usr/lib/sdk/tools/hprof-conv

So where is hprof-conv? This is one case where Google has moved one of its tools away from its prior location: hprof-conv now lives under platform-tools rather than tools.

On Linux-based systems, fixing this is pretty easy; I'll just create a symlink at tools/hprof-conv pointing to platform-tools/hprof-conv:

usr/lib/sdk/tools$ ln -s ../platform-tools/hprof-conv hprof-conv

Relaunch Eclipse, and you're all set for your next Android adventure with the latest SDK!

Caution: These steps were formulated specifically for ADT v22.6.2 and the Android SDK v24 update. Depending on the versions you have, you may need to go through more or less work, or not be able to get things running at all.

Saturday, February 6, 2016

Accessing Google Docs: The Lightweight Way

Google Docs, or GDocs, have almost become a part of the daily routine of many people. With the "cloud is everything" hype, people are moving away from traditional desktop apps to online office automation software like GDocs to save themselves from the trouble of having to leave their web browsers even for a few minutes.

Unfortunately, GDocs is arguably one of the biggest bandwidth wasters in the Google Apps product range. This is particularly noticeable if you have a weak internet connection. In my case, on Firefox I can never edit a document while on a 3G mobile broadband data connection, as the editor always falls back to the offline state a few seconds after loading. (Chrome is a bit better, as it allows offline editing even when the connection is down, provided that you have installed the relevant GDocs browser app.)

The bigger issue is that whenever you just want to read a document, GDocs has to load the full editor (the fancy UI, all sorts of JS and CSS libraries, etc.), which can sometimes amount to 3-4 MB on a fresh load. To make things worse, it maintains a persistent connection with the backend to pull updates in case someone else happens to be editing the document simultaneously. On a document with several people working, this can lead to serious bandwidth usage and even CPU/memory hangs in extreme cases.

However, if you just need to view or read a GDoc, there are far simpler and more lightweight alternatives. These are essentially the mobile interfaces: what you would see if you visited the document in your mobile phone's web browser. While they lack the fancy UI elements, editability, interactivity and collaboration features of the standard desktop site, they consume just a few kilobytes on load (not counting the actual document content) and provide you a hassle-free way to access the documents read-only.

To switch to the mobile view from the standard view, replace the trailing /edit?... portion of the respective GDoc URL with the following: