Lossy music might be bad for your health

Lossy music might be bad for your health (people in general), or maybe it’s just me. I know it’s bad for my health; the only question that remains is how common or uncommon my situation is. I have conducted numerous blind experiments on myself, to eliminate the placebo effect and other psychological factors, and the results are absolute: with 100% certainty, certain forms of lossy compressed music make me physically ill. It’s also not about audio quality: as you will see below, even when I can’t distinguish between the compressed and uncompressed music, the lossy compression still makes me sick; and in some cases the lower quality uncompressed music is fine while the higher quality compressed music makes me sick.

It started around 2003, when I decided to rip all my CDs to MP3. After some days of listening to music, I noticed that each day, about 20-30 minutes after I started the music, I would get a headache, so I would shut it off and take a nap. Again and again it happened, but at first I couldn’t believe it was the music. I decided to look into the music as a possible cause, so I started with “the Pepsi challenge” or “the Coke challenge,” as it were: I ripped a bunch of CDs to both MP3 and WAV format, using the LAME encoder at maximum quality (320 kbps stereo). I would listen to a section of one song, pause it, then play the same section of the other. Over and over I tried different parts of different songs, trying to see if I could hear the difference. The conclusion: I could not, except in rare cases where a cymbal sounds like a splash of water or breaking glass, or a low drum beat sounds hollow or something. Those “artifact” cases were very rare, and very subtle – certainly not something I would care about. Still, as the days went on, I would occasionally play the MP3s and get a headache, and I would occasionally play the CDs and be fine. So it must be placebo, right? I devised a second test; this time a real experiment:

Since I already had several CDs ripped to both MP3 and WAV format, I wrote a small Python script that would play an album, but for each track it would randomly pick whether to play the MP3 or the WAV. It wouldn’t tell me, or display anything different; it would only record its decision in a file for later review. I sat in my living room with a pencil and paper, listening. I didn’t write whether I thought the music sounded good or bad; I didn’t guess whether it was compressed or not. I wrote whether I enjoyed the song, or whether it made me feel bad inside, in a vague, general sense. After listening to a bunch of songs, I would then compare my notes against the actual decisions of the program. The results, over dozens of tracks, were astounding: there was a 100% correlation between the lossy compressed music and me feeling bad. Literally every single decision the program made, with zero exceptions, I was able to detect by feeling good or feeling bad.

So it sucked. I had discovered something about myself that I didn’t like, and nobody likes. I became the awkward person who has to ask people to shut off their music sometimes, followed by a long discussion about how it’s not music quality, but subconscious rendering blah blah, which nobody truly understands or believes when I tell them. They just think I’m a weird sensitive audiophile, which is annoying and wrong. If I go to a party and they’re playing music from an iPod or something, I’m not the party pooper who will try to ruin the party; I just leave. We’ve had some cases where people came to visit our house and played music that I had to ask be shut off. Certain stores, I cannot enter. I’ve occasionally had to ask someone to change the music when I got into their car. It sucks; it’s awkward and embarrassing every time.

It’s gotten worse over the years. Now it only takes a couple of minutes to start feeling sick. If I ignore it, by 20 minutes or so it won’t be merely a headache; I feel nauseous and dizzy, in a weird way. I use the words “headache” and “nauseous” and “dizzy” imprecisely – describing this feeling is like using the word “headache” to describe a migraine. Yes, headache can be part of it, but it’s not really a headache. It’s a difficult-to-describe feeling that these words only approximate.

Denial being what it is – Again and again over the years I kept trying to convince myself it’s not real. Unfortunately, I’ve perfected the art of detecting lossy compressed music: I don’t try to listen for artifacts or quality or anything like that. The test is simple: I listen to the music, and imagine what the performers look like. If they look real, playing real instruments, the music is uncompressed. If they look like pixellated monsters, or a music video with glitchy static on the screen, it’s compressed. Nowadays I can usually identify lossy compressed music in a few seconds. After I figure out which I think it is, I ask the person playing the music, what service they’re using. The results of this have also been surprising –

MP3s: bad. AAC: bad. CD, FLAC, and WAV: good. FM radio is fine (caveat: HD radio is a problem; see the next paragraph). I presume Apple Lossless (ALAC) is OK, but I’ve never actually found anyone using it. If you burn MP3s to a CD, back in “normal” format for a regular (non-MP3-playing) CD player, that’s also bad. The damage has already been done. Amazon and iTunes streaming services are a problem. Sirius/XM satellite radio makes me sick… Buuuutttt… Music videos on TV: no problem. Music in movies on DVD or BluRay: no problem. Music from YouTube: no problem. I’m not talking about the coincidence of music and video – even if I shut off the screen, or I go to a party where somebody’s just playing music… I first determine whether or not I’m OK with it, and then I ask them what they’re playing. Consistently, again and again, I’ve found that music played from YouTube doesn’t bother me. The music on TV, DVD, BluRay, and YouTube is all in lossy compressed formats, but it doesn’t cause a problem for me. What’s different between these formats and the formats that cause a problem? Most likely they use different codecs, but I’ve never been able to identify which codecs are in use on which media, or test that theory. I know certain lossy codecs make me sick, and others do not. I’m not sure which characteristics of those codecs are the problematic ones.

FM radio is uncompressed, and much lower quality than CDs or maximum quality MP3s. I’ve gotten used to the idea over the years that FM radio is fine. So a couple of years ago, when we went on vacation to Seattle, we rented a car, I put on the FM radio, and I was surprised to discover it was making me sick. I checked and re-checked and confirmed: this is only FM radio. It should be fine. It can’t be making me sick. This can’t be real. But I couldn’t bear it; I withstood about 10-15 minutes, and then I had to shut off the radio. Later, in a parking lot, I started pushing buttons on the radio, and discovered: this radio can play both FM radio and FM HD radio. By default, it plays HD radio, but the display still just says “FM.” You can toggle the HD on or off. So I did. It was unbelievable to me – like night and day. When playing HD, the sound quality was obviously much higher, but I felt like knives were stabbing my brain. (Obviously I’m being dramatic and exaggerating.) Switch to non-HD: pure relief. Adele sounds amazing and lovely. Toggle back and forth between HD and non-HD several times. Tell everyone about it. Weep internally about the future where everything is going this direction, imagining how I’ll be forced to become a reclusive hermit with no friends.

For the most part, over the last decade, I’ve listened to FM radio and bought CDs, which I rip to FLAC format. That works, but it’s a lot of effort. The FLAC files take up enough space that I can’t fit very many on my phone. You don’t get a lot of diversity in your music if you can’t listen to samples before buying a CD. Unless you dump tons of money into it, you just don’t discover new things. FM radio sucks because of the DJs and commercials and repetition. I end up mostly just having no music in my life.

Recently, we drove an hour to visit some friends out in the woods. When we got there, the music was making me sick, but I couldn’t just leave, because I was with my family, visiting friends, far away from everything. It was important for me to stay. So I awkwardly talked to our hosts, found they were streaming Amazon, and uncomfortably asked them to shut off the music. That got me thinking:

It’s 2017. Storage costs have come way down since 2003. Networks have gotten amazing compared to a few years ago. Maybe by now there’s a streaming service that streams lossless music? I googled for it, and sure enough, Tidal exists. They offer a streaming service with lossy compressed music to compete against Amazon and iTunes, but they also offer a premium service that streams lossless.

It’s been a few months since I subscribed to Tidal, and I have been in pure music heaven all this time! :-) Streaming music absolutely changes everything. You can start playing a song, and if you like it, add it to one of your playlists. You can browse “similar artists” and discover music you never would have otherwise known. You can select from genres or playlists that they publish for you. On my phone, I can select an album and toggle “make available offline” so it will download over wifi, and I can play it in the car without consuming mobile data (or depending on a reliable 4G signal). I absolutely love it.

I am an exceptionally self-aware, introspective person, and I wonder: are other people also negatively affected by lossy compression formats, but just unaware of it? Or is it just me? Now that I work at Tufts, I’m considering going to the psychology department to ask if anyone is interested in either doing a case study on me, or performing a behavioral study on a larger audience, but I haven’t done that yet.

Maybe it’s just me, but I think there’s a very real chance other people are also negatively affected by lossy compressed music; they just haven’t connected the dots and figured it out yet.

In order to do great things, you must believe in your own greatness. But there’s a flip side – You must not become a Narcissist, elitist, or egomaniac. The balance lies in knowing your own limits, doing all you can do, and enlisting and inspiring others to help achieve goals that are otherwise beyond reach.

How to choose a fair random number in a range, or, who gets the last beer:

Alice and Bob both want the last beer, and they most certainly will not share. They each prefer to gamble, by flipping a coin. Since the outcome of the coinflip is unpredictable and fair (equal probability of each possible outcome), and since there are two people and two sides of the coin, they can easily agree on a mapping of coinflip results to selection of the person who gets the beer.

Now imagine they don’t have a coin, but they have a 6-sided die. It’s still easy to assign a die outcome to a fair selection of person-who-gets-beer, because there are two people, and an even number of sides on the die. Six divides evenly by two.

Things would be a bit more tricky, if Alice, Bob, Carol, and David were all contending for that last beer, because 6 doesn’t divide evenly by 4…

The trickiest case of all occurs when we have a very large many-sided die (say, 20 sides) being used to select from a group (say, 3 letters, or people’s names), and the size of the outcome group doesn’t divide evenly into the size of the die.

How many fair groups are there? How many times does 3 go into 20? The answer is 6. Divide 20 by 3, discard the remainder.

What is the largest fair number on the die? It’s the number of fair groups, multiplied by the size of each group. 6 * 3 = 18. So the way to select a fair random outcome is to roll the die, and if it produces one of the unfair outcomes, roll again until you get one of the fair outcomes. Finally, let’s make use of this in a computer:

In the computer, there’s a difference between “random” and “crypto random.” Just plain “random” is a low-cost math function whose output looks “random” to a human, but with some careful analysis of a known section of output, the next bytes and even previous bytes can be predicted or calculated. Just plain “random” is good for toy video games and entertainment, but not good for games that have material consequence, such as gambling or hacking into national security systems (for “fun”). Crypto random has the characteristic that each byte is unrelated to the others (it cannot be used to calculate earlier or later bytes), and that each value has equal probability. In other words, getting a crypto random byte is just like rolling a 256-sided die, numbered from 0 to 255.

Realistically, most CPUs can’t work on data smaller than 32 bits wide, so if you perform a calculation on a single byte, the CPU is actually working on 4 bytes and discarding 3 of them. For simplicity and by way of example, let’s assume we’re working with unsigned 32-bit integers, UInt32. When we generate 4 random bytes (a random UInt32), this is analogous to a 2^32-sided die, with each side numbered from 0 to 4,294,967,295. To illustrate, we’ll assume you want to randomly select one of 17 people or things, numbered 0-16.

Clearly, you can calculate your Selection #, by getting the remainder (modulus) after dividing the random Die # by 17, as long as the Die # is less than or equal to 4,294,967,294. If the Die # happens to be the unfair value 4,294,967,295, then you’ve got to roll the die again.

Naturally, we don’t want to hard-code the numbers 17 or 4,294,967,294 into our program. We want to create a function that can return a random number in an arbitrarily sized, user-specified range. So inside the RandomRange function, we need to calculate the maximum fair die value, based on the size of the die, and the size of the requested range. The C# code below should translate easily into most other languages.

// Requires: using System; using System.Security.Cryptography;
// Returns a randomly selected UInt32 in the range min to max, inclusive
UInt32 RandomRange(UInt32 min, UInt32 max) {
	return min + RandomRange(max - min);
}

// Returns a randomly selected UInt32 in the range 0 to max, inclusive
UInt32 RandomRange(UInt32 max) {
	// If the user requests a random value from 0 to 0, give them a 0.
	if (max == 0) {
		return 0;
	}

	UInt32 die = GetRandomUInt32();

	// If they requested a random UInt32 with literally no restriction, not
	// only do we already have a valid return value, it's also *important* 
	// that we return now, before trying to add one to max, which would be an
	// overflow.
	if (max == UInt32.MaxValue) {
		return die;
	}

	// Suppose max is 17, then the user wants a number from 0 to 17 inclusive,
	// which means the size of the range they specified is 18.
	UInt32 rangeSize = max + 1;

	UInt32 maxFair;
	if (UInt32.MaxValue % rangeSize == max) {
		// there are no unfair groups
		maxFair = UInt32.MaxValue;
	}
	else {
		// divide by rangeSize and multiply by rangeSize...
		// because these are ints, the division truncates, so after
		// multiplication, this has the effect of rounding down to the
		// bottom of the group.
		// Subtract one to get the maximum of the previous group, which
		// is the max fair value.
		maxFair = UInt32.MaxValue / rangeSize * rangeSize - 1;
	}

	while (die > maxFair) {
		die = GetRandomUInt32();
	}
	return die % rangeSize;
}

UInt32 GetRandomUInt32() {
	using (var rng = new RNGCryptoServiceProvider()) {
		var randomBytes = new byte[sizeof(UInt32)];
		rng.GetBytes(randomBytes);
		UInt32 retVal = BitConverter.ToUInt32(randomBytes,0);
		Array.Clear(randomBytes,0,randomBytes.Length); // crypto obsession
		return retVal;
	}
}

Government Encryption Backdoor Foiled by Puppet Wizards

This content has moved:
Government Encryption Backdoor Foiled by Puppet Wizards

JavaScript Cryptography Not Harmful (Counter Argument)

This content has moved:
JavaScript Cryptography Not Harmful (Counter Argument)

The System

So, here’s a quick description of how “the system” is broke. Spoiler Alert: It’s not Obamacare or Welfare.

Every economic system, be it capitalism, communism, or whatever, is designed for the benefit of society. Instead of people living in isolation and being responsible for every life task, which prevents them from specializing in anything, every economic system has people cooperating with each other and performing specialized jobs, to improve the net benefit to society. A farmer can just focus on perfecting farming. A blacksmith can specialize in blacksmithy. The butcher, the baker, and the candlestick maker all focus on their specialties.

With every generation, the tools and techniques improve. We now have individual farmers operating powerful tools, performing the equivalent work of 1,000 farmers from yestercentury. If we follow this line of thought for a while: the economic system continues to require that every person work for a living, and yet the efficiency of each person increases, so we overproduce. After some number of generations, we reach the point where only 1 in 1,000 people needs to do any work in order to support all of society, and then 1 in a million, and then we have the logical eventual conclusion: a society where machines do all the work. They self-operate, self-repair, self-invent, and self-construct. Humans are unnecessary except as consumers. Jobs are unnecessary. But our economic system was originally designed to reward people only for their work. Somewhere along that path, the economic system is forced to adapt, or else break down and fail.

In present times, we are somewhere along that path. College grads increasingly stay at home with parents because they can’t find jobs. They take jobs in retail, operating cash registers, cooking fast food, driving taxis, because they are required to get a job and there isn’t enough work available in the world to keep all those people gainfully employed in their specialized areas.

But even THESE jobs are increasingly being replaced by machines, or the human labor they require is being reduced by better efficiency, tools, and processes.

People recognize the problem and debate “how to create jobs,” usually arguing over the distribution of tax breaks and tax burdens. “Tax the rich.” “Give tax breaks to ‘job creators.’” “Cut welfare.” This entire debate is narrow-minded and near-sighted; it misses the bigger picture and demonstrates a misunderstanding of the actual problem, and a lack of wisdom. All of these arguments assume a fundamentally functional system that merely needs tweaking of operating parameters. They neglect to address the root cause of the problem.

We have Dilbert jobs, where employees show up to do useless work just for the sake of being a body that collects a paycheck without poverty or criminality. We have 9 guys standing around watching one guy with a shovel, holding the plans upside down. A few weeks ago, the gas company sent their guy to my house to inspect the pipes. He used a sniffing device, and found a gas leak. He informed me, “That pipe right there needs to be tightened. I’ll call the guy with the wrench.” So I commented, “That must be a union job.” He said “Oh yeah. It is.” 20 minutes later, another guy came in with a crescent wrench and tightened the pipe. I had lots of wrenches right there with us in the basement, and of course, the first guy could have carried one of his own. That’s not the point. The point is, the union protects the jobs for the sake of keeping two men employed doing work whose sole benefit is to pay the men for non-criminal activity.

Don’t get me wrong – unions are very important to protect human rights in situations of exploitation and abuse. But the flip side is that not all union activity is about defending human rights and fighting exploitation and abuse. As demonstrated above, unions often protect jobs for the sake of protecting busywork. When machines are able to do the jobs more cost effectively than people, unions step in and protect peoples’ busywork jobs, opposing the machines. They fight to KEEP useless busywork, opposing progress.

A hundred years ago, a human was required to make the elevator go up and down. A human was required to light and extinguish the street lamps. Bowling pins were stood up by hand. Phone calls were connected by human operators. These jobs were all eliminated, and we as a society are better for it.

I consult at a robotics company. It is annoyingly common that we have some buyer ready to buy some robots, and then some union gets involved due to perceived threat to jobs. (A threat which is highly arguable.)

We should not be opposing progress. We as a society should get our shit together, and LET these useless jobs get eliminated, and FIND something productive for those people to do instead.

If you cannot imagine any big-picture goals for the human race to accomplish that would make you proud of being a member of the human race, then I pity you. I would like to take all those people doing useless busywork and get them back in school, performing arts, and developing the tools to detect and prevent extinction events such as the next big asteroid, and global climate change. Wait. If you cannot imagine big-picture goals that make you proud to be part of the human race – just go watch some sci-fi. Faster-than-light travel. Discovery of extraterrestrial life. How about a little bit of species habitat redundancy – colonize another planet so the human race doesn’t go extinct when the Sun destroys Earth. Never mind the Sun destroying Earth in some billions of years – WE are certainly going to destroy it ourselves if we can’t get control of global climate change and global overpopulation. Working on THESE problems would make me proud to be part of the human race.

“The System” of capitalism is broken because it is fundamentally designed to perpetually increase the efficiency of material benefit in society, and it cannot remain functional as that efficiency approaches its extreme. There is no choice about it; the inevitable conclusion is that you must adopt methods of wealth distribution disconnected from free market consumerism.

That means you have to pay people for jobs that the free market doesn’t demand. For now, this mostly happens through Dilbert jobs, government jobs, busywork jobs (some of which are protected by unions), and government grants and subsidies for various projects. The tiniest category, much smaller than any of the aforementioned, is welfare programs, including Social Security, Unemployment, and other programs.

How To Use TrueCrypt for Raw Device Virtual Pass-Thru (using VMWare Fusion) on Mac OSX

Before reading below, please read this post-mortem summary.

I tested several configurations. One of them was the procedure below, using TrueCrypt on the host OS to encrypt the raw block device before handing the block device to the guest OS. I also used Disk Utility to encrypt a filesystem on the second hard drive, and let the guest OS reside inside that encrypted filesystem. I also did a non-encrypted block device, passed thru to the guest OS. And here is what I have to say about it all: it’s absolutely confirmed that if the guest OS resides inside a Mac filesystem, then the host OS spends memory caching it. For most situations, that’s a bad thing (because the host is double-caching the same stuff that the guest is also caching), but if you constantly reboot your guest OS, then it’s a good thing (because the host is able to cache stuff while your guest OS has its caching systems offline or cold). However… when passing the raw block device to the guest OS (with or without TrueCrypt), despite the improved memory usage, the guest OS simply seems to become jittery. For the best combination of performance and security, I recommend using Disk Utility to encrypt the second hard drive, and then letting the guest OS reside inside the encrypted Mac filesystem.

In the past, I’ve run Fusion or Parallels in the mac, and I let the guest hard drive sit as a local file within the Mac filesystem. You know. The way they expect you to use it. But I don’t like this for several reasons, the first of which is that the Mac uses its memory to cache & buffer the guest OS hard disk, which the guest OS is already doing itself, so it’s a big fat waste of memory. There is filesystem overhead, which logically must reduce performance. You have to exclude the guest from Time Machine, and Spotlight, etc. So the conclusion I’ve reached is that I would much rather use a raw second hard drive (or partition) passed directly to the guest OS. No Mac filesystem in between.

I am currently using Mac OSX 10.9 Mavericks, VMWare Fusion 6, to run a Windows VM using a raw partition of a second hard drive. My host OS uses FileVault, but without any Mac filesystem on the second hard drive, it’s naturally left unencrypted. So naturally I want to add encryption, and make it automatically mount. Below is a description of how to do what I do. The only thing about it that I don’t love is the prompt, every time you launch the guest OS, “VMware Fusion requires administrative privileges for accessing Boot Camp disks. Type your password to allow this.” I am not using a boot camp disk, but apparently that’s just how they identify it internally. I would prefer for the guest OS to simply work, without needing me to type in the password.

  1. Using Disk Utility, I partition my second hard drive (because I want to, but don’t have to; I could have just as well used the whole second hard drive.) Even though I will not be using a Mac Filesystem, I make all the partitions temporarily “Mac OS Extended (Journaled)” and I give at least one an easily identifiable name, so I can later easily identify the right device.
  2. Don’t Quit Disk Utility yet.
  3. Install TrueCrypt
  4. Launch TrueCrypt, and click Create Volume
  5. Create a volume within a partition/drive
  6. While selecting the device, it’s important both to identify the correct device and to unmount it in Disk Utility before starting encryption. Also, make a note of which device it is (for example, rdisk0s3); you will need to know it later. After unmounting the volume, close Disk Utility and continue with TrueCrypt.
  7. The rest of the TrueCrypt selections are self explanatory, except: when prompted about 4GB files, it doesn’t matter what you choose. That’s just a convenience thing to guide people on what filesystem to choose, and it doesn’t matter for us, because we choose “none” for the filesystem. At this point, you’ll have to wait a while for TrueCrypt to do its work.
  8. After it’s done, launch a terminal, create a file such as /bin/MountTrueCryptVolume.sh, and make it owned by root.

    sudo chmod 700 /bin/MountTrueCryptVolume.sh
    sudo chown root:staff /bin/MountTrueCryptVolume.sh

  9. Fill that file with something like this: (To be clear, I only consider embedding the password here safe because the host OS is using whole-disk FileVault, and the file is locked down, accessible only by root.)

    #!/bin/bash
    /Applications/TrueCrypt.app/Contents/MacOS/TrueCrypt --filesystem=none --password=TrUeCrYpTvOlUmEpAsSwOrD /dev/rdisk0s3

  10. Edit the sudoers file (hopefully you know a little vi)
    sudo visudo

    Find this line:
    %admin ALL=(ALL) ALL

    And append to it, as follows:
    %admin ALL=(ALL) ALL, NOPASSWD: /bin/MountTrueCryptVolume.sh

  11. Wait till TrueCrypt is done, then ensure the volume is dismounted, and quit TrueCrypt.

  12. Simultaneously validate that your mount script works without password, and identify the name of the decrypted volume, as follows:

    echo "before" ; ls /dev/disk* ; sudo /bin/MountTrueCryptVolume.sh ; echo "after" ; ls /dev/disk*

    In my case, my new volume created by TrueCrypt is /dev/disk3

  13. Now to add this disk to VMWare:

    Open a command Terminal, and cd to where you want to create your new vmdk virtual disk.

    Notes about the following command: If you want help, you can run vmware-rawdiskCreator -h. Please notice that above I used the raw disk “rdisk” and now I’m using the non-raw disk “disk”… There is no choice about this; VMware refuses to work with rdisk3, so I am required to specify disk3.

    Since there is no partition map inside the TrueCrypt volume, I specify “fullDevice,” but if you wanted to, you could partition the encrypted drive in Disk Utility (the disk is called “volume.dmg” for some silly reason). Then you would run vmware-rawdiskCreator print /dev/disk3 to identify the correct partition number, and specify the partition number instead of “fullDevice.”

    In your present working directory, a small file will be created that describes the disk to VMware, and references the raw disk (or partition) as the backing store. You must give this file a name. The “.vmdk” extension will be added automatically. So I specified the name “TrueCrypt_disk3_wrapper” and this created the file “TrueCrypt_disk3_wrapper.vmdk”

    And finally, I looked all around to figure out that in VMware, if you use the GUI to create a disk, your choices are IDE, SCSI, or SATA, but on the command line, your choices are “ide,” “buslogic,” or “lsilogic,” where lsilogic seems to be SCSI and is preferred for most guest OSes. If you want to know which is preferred for your OS, just try adding a disk with the GUI to see what it selects by default.

    sudo /Applications/VMware\ Fusion.app/Contents/Library/vmware-rawdiskCreator create /dev/disk3 fullDevice TrueCrypt_disk3_wrapper lsilogic

    You must chown that file to yourself.
    sudo chown eharvey TrueCrypt_disk3_wrapper.vmdk

    And annoyingly, you cannot add this vmdk to the guest machine with the GUI. You must hand-edit the .vmx file. (Shut down the guest OS, and quit VMware first.) Copy the existing lines, and modify, similar to these:

    scsi0:1.present = "TRUE"
    scsi0:1.fileName = "TrueCrypt_disk3_wrapper.vmdk"

  14. And finally, finally. If you want the TrueCrypt volume to mount automatically at boot: You can edit your crontab with the command crontab -e and insert a line like this:

    @reboot /usr/bin/sudo /bin/MountTrueCryptVolume.sh

S/MIME email encryption

I’ve written some guides on how to do S/MIME email encryption. These are simple enough that most people, even non-technical, can usually get through them without assistance (or without much assistance). Start by getting a digital ID certificate from startssl.com, and then install it into Outlook or Apple Mail, or whatever your favorite mail client is.
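
If your CA hands you the certificate and private key as separate PEM files, the one step that tends to trip people up is bundling them into a PKCS#12 (.p12) file, which is the format Outlook and Apple Mail expect to import. A minimal sketch, assuming openssl is installed; the filenames here are hypothetical:

openssl pkcs12 -export \
  -in my_smime_cert.pem \
  -inkey my_smime_key.pem \
  -out my_smime_identity.p12

You’ll be prompted to set an export password, which you’ll type again when importing the .p12 into your mail client.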

SimpleSMF – start and stop VirtualBox guests automatically on solaris/opensolaris/openindiana

Here it is, the Simple SMF project:

SimpleSMF: http://code.google.com/p/simplesmf/

I created iscsi-zpool simplesmf because I had some iscsi pools that needed to be mounted at bootup, but needed to gracefully dismount before iscsi initiator shutdown, and needed to gracefully fail if the target were unavailable at boot.

I created virtualbox-guest-control simplesmf because the next two leading products (vboxsvc and the built-in SMF service included with VirtualBox 4.2 and newer) had some drawbacks… vboxsvc is very powerful but very complex, and the complexities are things I don’t need in my environment. I also talked with some other people about vboxsvc, and one of them said it was only 95% reliable for him. Given the complexity and confusion, that was no surprise, so I wanted something simpler. The built-in SMF service in VirtualBox 4.2 I haven’t tested yet. But I’m told it starts up guests gracefully at bootup and fails to shut down guests at shutdown. Now… knowing what I know about SMF, I think that’s probably bunk. Probably the guests *do* shut down gracefully, but again, I haven’t tested.

Also, I think SimpleSMF serves as a good example for just *any* simple smf service you might want to create.
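
If you do use it as a starting point for your own service, the install pattern is the same one shown for the iscsi-pool service further down this page: a method script, a manifest, then import and enable. A rough sketch with placeholder names (the FMRI comes from whatever your manifest declares):

sudo cp mysvc-method.sh /root/bin/
sudo chmod +x /root/bin/mysvc-method.sh
sudo svccfg import mysvc.xml
sudo svcadm enable svc:/site/mysvc:default
svcs -x svc:/site/mysvc:default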

openindiana iscsi mirror local & remote

I have two servers, each with 6 disks. The first 2 disks of each are configured as OS mirror. This leaves 4 disks unused. I want each server to have a pool, 2 local disks mirrored against 2 remote disks. This wasn’t as easy as it sounds, so here’s a process that works. I have only done this on OI 151a6. I don’t know what will happen on solaris 10, opensolaris, solaris 11, or anything else.

Edit: This configuration turned out to be unstable. In the ensuing months, the servers would randomly crash in ways that made me suspect IO errors, but couldn’t really prove it. We eliminated the local-to-remote mirror iscsi configuration, and things became stable, but not high availability.

First, set up iscsi.

Now, each system has unused: c4t2d0 c4t3d0 c4t4d0 c4t5d0
Each system will export via iscsi, all four disks: c4t2d0 c4t3d0 c4t4d0 c4t5d0
Each system will initiate a connection to all eight disks, including the ones at 127.0.0.1

When creating zpool, don’t use the local device name (c4t2d0 or whatever) because that’s not available to the other system. Also, if you use the local device name instead of the multipath device name, it seems to cause data corruption on that device, as perceived by the other system using the multipath name.

Use only the multipath iscsi target device names.

Do this on both systems, to enable targets:

sudo pkg install pkg:/network/iscsi/target
sudo svcadm enable -s svc:/system/stmf
sudo svcadm enable -s svc:/network/iscsi/target

Do this on both systems, to enable initiators:

sudo iscsiadm modify discovery --static enable

Note: I’m skipping the chap bidirectional authentication in this example. You probably want to figure that out.

On both machines:

for DISKNUM in 2 3 4 5 ; do sudo sbdadm create-lu /dev/rdsk/c4t${DISKNUM}d0 ; done
for GUID in `sudo sbdadm list-lu | grep rdsk | sed 's/ .*//'` ; do sudo stmfadm add-view $GUID ; done
sudo itadm create-target

Now set up the initiator on host1:

sudo iscsiadm add static-config iqn.2010-09.org.openindiana:xxx,127.0.0.1
sudo format -e

Make a note of the new device names. And hit Ctrl-C.
Now you know which disks are on host1. Make a note, as below, “export H1T0=…”

sudo iscsiadm add static-config iqn.2010-09.org.openindiana:xxx,192.168.7.7
sudo format -e

Make a note of the new device names. And hit Ctrl-C.
Now you know which disks are on host2. Make a note, as below, “export H2T0=…”

Now set up the initiator on host2:

sudo iscsiadm add static-config iqn.2010-09.org.openindiana:yyy,192.168.7.8
sudo iscsiadm add static-config iqn.2010-09.org.openindiana:yyy,127.0.0.1

sudo format -e

Make a note of the new device names. And hit Ctrl-C

Now collect the device names you noted into shell variables, something like this:

export H1T0=c5t6540CB9496F540CB9496F540CB9496F4d0
export H1T1=c5t6540CB9496F540CB9496F540CB9496F1d0
export H1T2=c5t6540CB9496F540CB9496F540CB9496F2d0
export H1T3=c5t6540CB9496F540CB9496F540CB9496F3d0

export H2T0=c5t601C57BADF301C57BADF301C57BADF34d0
export H2T1=c5t601C57BADF301C57BADF301C57BADF31d0
export H2T2=c5t601C57BADF301C57BADF301C57BADF32d0
export H2T3=c5t601C57BADF301C57BADF301C57BADF33d0

Only do this on one server:

sudo zpool create iscsiTank1 mirror $H1T0 $H2T0 mirror $H1T1 $H2T1
sudo zpool create iscsiTank2 mirror $H1T2 $H2T2 mirror $H1T3 $H2T3

Now, if you want to pass the pool over to the other system, just export and import it.
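
For example, with the pool name from above (run the export on the host that currently owns the pool, and the import on the other host):

sudo zpool export iscsiTank1
sudo zpool import iscsiTank1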

Ok, so.
New problem.
Natural state of problem, by default:

If I don’t export the pool before rebooting, then either the iscsi target or the initiator is shut down before the filesystems are unmounted. So the system spews all sorts of error messages while trying to go down, but it eventually succeeds. It’s somewhat important to know whether it was the target or the initiator that went down first: if it was the target, then only the local disks became inaccessible, but if it was the initiator, then both the local and remote disks became inaccessible, which will result in data loss.

Upon reboot, the pool fails to import, so the svc:/system/filesystem/local service fails, and of course all the other services depend on it. The system comes up in maintenance mode. The whole world is a mess; you have to log in at the physical text console to export the pool, and reboot. But it comes up cleanly the second time.

Solve the problem:

Fetch files:

iscsi-pool-ctrl.conf
iscsi-pool-ctrl.sh
iscsi-pool-ctrl.xml

sudo mkdir /root/bin
sudo cp iscsi-pool-ctrl.sh /root/bin
sudo chmod +x /root/bin/iscsi-pool-ctrl.sh
sudo cp iscsi-pool-ctrl.conf /etc/iscsi-pool-ctrl.conf

Edit /etc/iscsi-pool-ctrl.conf

Test it by manually running some commands like:

sudo /root/bin/iscsi-pool-ctrl.sh import
sudo /root/bin/iscsi-pool-ctrl.sh import
sudo /root/bin/iscsi-pool-ctrl.sh export

The second import shows what the behavior will be like upon svcadm refresh or svcadm restart.

sudo svccfg import iscsi-pool-ctrl.xml
sudo svcadm enable svc:/network/iscsi/pool:default

If you are using iscsi to connect to localhost, make the initiator dependent on the target as follows:
(Otherwise, during reboots, the target will die before the initiator, which is bad news.)

sudo svccfg -s svc:/network/iscsi/initiator:default
svc:/network/iscsi/initiator:default> addpg iscsi-target dependency
svc:/network/iscsi/initiator:default> setprop iscsi-target/grouping = astring: "require_all"
svc:/network/iscsi/initiator:default> setprop iscsi-target/restart_on = astring: "none"
svc:/network/iscsi/initiator:default> setprop iscsi-target/type = astring: "service"
svc:/network/iscsi/initiator:default> setprop iscsi-target/entities = fmri: "svc:/network/iscsi/target:default"
svc:/network/iscsi/initiator:default> exit

sudo svcadm refresh svc:/network/iscsi/initiator:default
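
If you want to double-check that the dependency took and that everything is online, something like this should do it (svcprop and svcs are the standard SMF query tools):

svcprop -p iscsi-target svc:/network/iscsi/initiator:default
svcs svc:/network/iscsi/target:default svc:/network/iscsi/initiator:default svc:/network/iscsi/pool:default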

Google wants images of my passport, driver’s license, bank statement, etc.

Farewell google. What else can I say? Seriously.

I tried to purchase something from the Google Play store on my Android tablet today, and I kept getting an error that doesn’t say much of anything: “Your order could not be processed. Please try again.” So I tried a different credit card, and I tried adding a new credit card; still generic error messages.

So I tried logging into Google Wallet via web browser. They ask for Account Verification, which includes a driver’s license (upload scanned images), passport or other photo ID, bank statement, credit card statement, and/or utility bill.

Here’s what they say:

We were unable to verify the credit or debit card information for your recent order. Your order has been cancelled and your card was not charged. Rest assured that Google is committed to preserving the security of your information and providing a safe online shopping experience.

To resolve this issue, you’ll need to scan the following verification documents to your computer and then upload them below.

If you don’t have a scanner, please click here. (Fax option)

Until we receive and verify the requested documents, future orders will not be processed. Please do not create additional accounts.

If you choose not to submit these verification documents, your account will remain suspended and you will not be able to place orders or access your Google Wallet account.

They ask for my driver’s license, passport, bank statement, credit/debit card statement, utility bill.

Obviously, I’m not going to give them any or all of that stuff. Just so I can pay them $0.99 for some stupid app.

When I pay other companies, they do normal things, like, redirect me to Verified By Visa, or stuff like that. This whole process is taking place over SSL secured https… And I have a strong password and two-factor authentication on my google account… So there’s seriously no way for any fraud to be taking place either by me or anyone else trying to hack my account or anything. This is going way too far. Nobody in their right mind should give any credit card payment processing center their driver’s license, passport, bank statement, etc.

Foolish. Baaah!!! I want to play my stupid video game! 😉 Too bad…

Broken RSA Keys (part 3: openssl)

Openssl uses the RANDFILE environment variable or configuration setting in its config file to specify the location of a random seed. During key generation, this seed is combined with a few bytes from /dev/urandom, to be used as a new seed for the openssl internal pseudorandom number generator.

In most systems, you can find your own personal openssl seed in ~/.rnd, and for the purposes of this blog post, I am going to use ~/.rnd and RANDFILE interchangeably. But of course, you need to use whatever the correct RANDFILE is in your configuration. Upon first run, openssl should generate ~/.rnd for you. If you generate some key with openssl and ~/.rnd still doesn’t exist, you’d better dig into your environment variables and openssl config file to find RANDFILE. You’re going to need it momentarily.
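
A quick way to hunt for it (just a sketch; openssl version -d prints the directory where the active openssl.cnf usually lives, and the exact config path varies by distro):

echo "$RANDFILE"
openssl version -d
grep -i randfile "$(openssl version -d | cut -d'"' -f2)/openssl.cnf"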

Every time openssl reads ~/.rnd, it overwrites the file with a new random seed for next time. So to ensure strong entropy using openssl, all you need to do is ensure strong entropy entered into this file once. After that, you may safely assume all your openssl operations on that machine include high entropy.

This file is 1k long (8192 bits), but your openssl private key has a cryptographic strength around 128 or 256 bits (a 3072-bit RSA or DH private key has a cryptographic strength of 128 bits). Also, when openssl reads your RANDFILE, it will mix in additional bytes from urandom, which can only strengthen your key further. So we don’t need anywhere near 8192 bits of entropy in your RANDFILE; 32 bytes (256 bits) is plenty.

There are lots of easy ways to get this wrong. You could be reading the wrong openssl.cnf file. Maybe you had a typo when you set RANDFILE. Maybe the openssl you’re using ignores your RANDFILE environment variable. To eliminate all of these possible sources of error, do this:

  • Run your openssl command.
  • Now check your ~/.rnd file (or whatever RANDFILE) to ensure it exists.
  • Get the md5sum.
  • Run your openssl command again.
  • Get the new md5sum, and ensure it’s different from before. This proves you’re looking at the right RANDFILE, and that it’s actually being used by your openssl command. A sketch of this check appears below.
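
A minimal version of that check, assuming ~/.rnd is your RANDFILE and using key generation as the openssl command:

md5sum ~/.rnd
openssl genrsa -out /tmp/testkey.pem 2048
md5sum ~/.rnd
rm /tmp/testkey.pem

The second md5sum should differ from the first.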

Now, overwrite that file with a new random seed:
dd if=/dev/random bs=1 count=32 of=~/.rnd

After generating a new random seed file, run your openssl command for real, trusting that you have strong entropy from now on.

Please see also:

Broken RSA Keys (part1: the problem)
and
Broken RSA Keys (part 2: fixing ssh keys)

Broken RSA Keys (part 2: fixing ssh keys)

As mentioned in a previous post, there are problems with people generating keys with insufficient entropy. This is particularly a problem for ssh, which generates the host ssh keys upon first boot, when there was probably insufficient entropy available.

If you’re generating ssh keys (ssh-keygen), you can solve the problem by using SSH_USE_STRONG_RNG as shown below. Note: the value is in bytes, so 32 equals 256 bits.

To generate good SSH Keys (assuming redhat derivative linux):

sudo mkdir /etc/ssh/oldkeys
sudo mv /etc/ssh/*_key* /etc/ssh/oldkeys

# SSH_USE_STRONG_RNG is honored by Red Hat's patched OpenSSH; the value is in bytes (32 = 256 bits).
# sudo normally resets the environment, so pass the variable on the sudo command line
# rather than relying on an exported variable surviving sudo.
sudo SSH_USE_STRONG_RNG=32 ssh-keygen -q -C "" -N "" -t dsa -f /etc/ssh/ssh_host_dsa_key
sudo SSH_USE_STRONG_RNG=32 ssh-keygen -q -C "" -N "" -t rsa -f /etc/ssh/ssh_host_rsa_key
sudo SSH_USE_STRONG_RNG=32 ssh-keygen -q -C "" -N "" -t rsa1 -f /etc/ssh/ssh_host_key

sudo chmod 600 /etc/ssh/*_key
sudo chmod 644 /etc/ssh/*_key.pub
sudo chown root:root /etc/ssh/*key*

sudo service sshd restart
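
To sanity-check the freshly generated keys (size and fingerprint), something like this should work:

for KEY in /etc/ssh/*_key.pub ; do ssh-keygen -lf "$KEY" ; done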

Please also see:
Broken RSA Keys (part1: the problem)
and
Broken RSA Keys (part 3: openssl)

Broken RSA Keys (part1: the problem)

Lots of stories are circulating in the news right now (such as this one) about RSA keys providing no security. The problem is not RSA. The problem is bad random seeds when you generated your keys. The solution: generate new keys using good randomness.

The word for “randomness” is “entropy.” Entropy is the measure of unpredictability. A single fair coin toss represents a single bit of entropy.

For the moment, I’ll write about Linux specifically. Much of this information comes from man 4 random.

/dev/random is gathered from hardware entropy sources, such as a TPM, keyboard and mouse movements, unpredictable disk seek times, and supposedly unpredictable characteristics of the ethernet and hardware interrupts, etc. Since there is a limited amount of system entropy available, if you try to read /dev/random, your read will block (stall) until more bytes become available.

/dev/urandom is a pseudorandom number generator, based on hash algorithms or ciphers or similar. It is actually deterministic given the initial seed. It is a non-blocking device, so you can read infinite bytes from it as fast as the CPU can generate them. If you read enough data from /dev/urandom without fresh entropy coming in, you are just stretching the same limited seed further and further; in principle, the output becomes predictable to anyone who can recover that seed.

As entropy becomes available in /dev/random, it is fed into /dev/urandom. This helps to continually re-seed urandom and helps urandom to be more actually unpredictable. Basically, urandom is an amplifier of the true entropy.

Unfortunately, when a system is freshly installed, upon first boot, there hasn’t been much entropy gathered. It’s fairly deterministic. During first boot, even if you use urandom, it is only amplifying a very small amount of actual entropy. This is when your ssh keys get generated.

Clearly, you should generate new server ssh keys (and any other keys) sometime after you can assure sufficient entropy. The question is, how do you know you have sufficient entropy in your key generation process?
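
On Linux, the kernel will tell you how much entropy it estimates it currently has in its pool (the number is in bits; a freshly booted minimal system may show very little). A quick check:

cat /proc/sys/kernel/random/entropy_avail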

I’m going to answer this question in two parts, separately. Once for ssh, and once for openssl. Please see:
Broken RSA Keys (part 2: fixing ssh keys)
and
Broken RSA Keys (part 3: openssl)

selinux notes

These are my notes, after learning from the Fedora SELinux FAQ.

  • Become root. Although you could do this with sudo, it’s more of a pain.
    Also, you may be glad, some day, that you left these files lying around, and the best place for them is in root’s home directory (or a subdirectory).

  • You must ensure the auditd service is installed and started.
    yum -y install auditd policycoreutils-python
    service auditd start

  • First, make sure there’s nothing in your audit log.
    audit2allow -m local -l -i /var/log/audit/audit.log
    If there is anything in there, clear it out with
    semodule --reload
    (Reloading the policy doesn’t erase the audit log; it just resets the point from which the -l flag reads.)

  • Now, temporarily disable selinux
    setenforce 0

  • Do whatever would normally get blocked.

  • And re-enable selinux
    setenforce 1

  • Make up a new module name, such as “httpdwritehomes” and prepare that module from the list of stuff that was captured in the audit log:
    export newmod=httpdwritehomes
    audit2allow -m $newmod -l -i /var/log/audit/audit.log > $newmod.te
    Be sure to edit that file, read it over, and remove anything that doesn’t belong.

  • Note: If nothing appears in the logs, you might have to disable “dontaudit” rules. See http://danwalsh.livejournal.com/11673.html
    semodule -DB
    and later
    semodule -B

  • Now compile and install the new module
    checkmodule -M -m -o $newmod.mod $newmod.te
    semodule_package -o $newmod.pp -m $newmod.mod
    semodule -i $newmod.pp
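
  • If you want to double-check that the module actually loaded, or remove it again later, something like this should do it:
    semodule -l | grep $newmod
    semodule -r $newmod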