Category Archives: computers

Arch Linux Update Script

I run several Arch Linux boxes at home. Updating these boxes involves a number of redundant actions: downloading new packages on each machine, installing those packages on each machine, and storing every previously installed package on each machine. I recently wrote a script to eliminate two of these redundancies.

The script mounts a package cache from an NFS server before running pacman, and downloads and installs the packages from the NFS mount. After finishing the install, the script unmounts the NFS share and rsyncs the currently installed packages to the local package cache. Finally, the script cleans the cache, removing all old versions of packages.

Using this script enables me to do several things:

  1. I only download new packages once. Installs on other machines use the package from the NFS cache.
  2. I can keep a full history of all installed packages in a central place.
  3. I keep the package caches on my machines down to a manageable size.
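The flow described above is simple enough to sketch. This is just an illustration in Python rather than the actual script; the server name and export path are assumptions, and the real script presumably limits the rsync step to the currently installed packages.

```python
import subprocess

SERVER = "fileserver"            # assumed host name
EXPORT = "/srv/pacman-cache"     # assumed NFS export path
CACHE = "/var/cache/pacman/pkg"  # pacman's default local cache

def update_commands(server=SERVER, export=EXPORT, cache=CACHE):
    """Return, in order, the commands the update flow runs."""
    return [
        ["mount", "-t", "nfs", f"{server}:{export}", cache],  # mount the central cache over the local one
        ["pacman", "-Syu"],                                   # new downloads land on the NFS share
        ["umount", cache],
        ["rsync", "-a", f"{server}:{export}/", cache],        # sync package files back to the local cache
        ["paccache", "-rk1"],                                 # drop all old package versions locally
    ]

def update():
    for cmd in update_commands():
        subprocess.run(cmd, check=True)
```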

You can download the script at

Hacking Fan Speed on Dell PowerEdge Servers

A couple of weeks ago, I acquired three old Dell servers to play with: a PowerEdge 840, PowerEdge 830, and a PowerEdge SC 430. One thing I didn’t think about before I got them was how much noise they might make. I’m used to desktop machines; they’re designed to run quietly enough not to annoy someone trying to work in the same room. Dell doesn’t go to the same trouble when they design their servers. The SC 430 is reasonably quiet (Dell based it on their Precision platform), but the other two, the 830 and 840, are just loud enough to be annoying.

My first thought was that I might be able to control the fan speed, but fan speed is firmly under the control of the BMC (Baseboard Management Controller); I can’t control it from software. I determined after some Googling that most people solve the problem by replacing the system fans with slower, quieter models. Unfortunately, I also found that the slower fans often trigger the “Lower Critical Threshold” (they spin too slowly), causing the BMC to spin them up, which gets me back to the noise problem I had in the first place.

Of course, because this is the Internet and I’m not the only hacker who likes to play with hardware, someone else had already solved this problem.

TL;DR: the thresholds used by the BMC can only be changed by hacking the BMC firmware update package. I downloaded that guy’s Python script and ran it on my PE 830. The script was able to parse the BMC firmware update for the PE 830, so I went ahead and ordered a new fan.


Now, before I could replace the fan, I had to deal with Dell’s custom pinout (because their 4-pin arrangement is clearly superior to the standard 4-pin arrangement that carries the exact same signals). This is already documented in several places on the web, but just to get it up here one more time:

Signal   Dell Color   Standard Color
PWM      Blue         Blue
RPM      Yellow       Green
+12V     Red          Yellow
Ground   Black        Black

I lopped the connector off of the original fan (at least it’s not a non-standard pinout on a standard connector). I couldn’t shove the stranded wire into the new fan’s connector, so I soldered some solid wire from a bit of spare CAT 6 cable onto the leads. I pushed the wire all the way through the connector and bent it over, which should keep everything in place.


Before clamping and soldering.


I plugged it in and started the machine. Good news: it was nearly silent. Bad news: loading the machine runs the core temp up to 60 C (Intel says to keep it below 63 C). The fan I bought pushes 74 CFM at full speed; the OEM fan managed 150 CFM. That wouldn’t be a problem, except that Dell cheaped out on all of their tower chassis in the mid-2000s and made the back case fan do double duty as the CPU fan.

I ran an experiment to try and determine how fast my fan needed to run to be effective. I wrote a script to collect fan speed and CPU temperature every 2 seconds. After collecting 5-10 minutes of data at idle, I started a program that fully loaded the CPU for several minutes, and continued collecting data until the system returned to a stable idle state. I ran this experiment on the PE 830 (Pentium D 940, 3.2 GHz, 130W TDP) and the PE 840 (Core 2 Duo E6400, 2.13 GHz, 65W TDP).
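A minimal sketch of such a collection loop (in Python, with the sensor reads passed in as callables, since the post doesn’t pin down which tool actually reads fan RPM and CPU temperature):

```python
import time

def collect(read_rpm, read_temp, duration_s, interval_s=2.0):
    """Poll fan speed and CPU temperature every interval_s seconds.

    read_rpm and read_temp are callables, so the sampling loop stays
    independent of whatever sensor tool is actually used.
    """
    samples, t0 = [], time.monotonic()
    while time.monotonic() - t0 < duration_s:
        elapsed = round(time.monotonic() - t0, 1)
        samples.append((elapsed, read_rpm(), read_temp()))
        time.sleep(interval_s)
    return samples
```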

As it turns out, the BMC isn’t intelligent enough to vary the speed of the fan based on CPU temperature. On the new fan, it drops the fan speed in increments of 75 RPM until it gets below the threshold, then spins it back up to a much higher speed and repeats the process. This cycle is apparently unaffected by CPU temperature:
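The sawtooth is easy to model. A toy Python sketch (the 75 RPM step comes from the observation above; the start speed and threshold are illustrative assumptions):

```python
def fan_cycle(start=2000, threshold=1000, step=75, samples=60):
    """Toy model of the BMC's sawtooth: step the speed down 75 RPM at a
    time; once a reading lands below the threshold, jump back to full
    speed and repeat. CPU temperature never enters the picture."""
    speeds, rpm = [], start
    for _ in range(samples):
        speeds.append(rpm)
        if rpm < threshold:
            rpm = start      # BMC spins the fan back up
        else:
            rpm -= step      # keep walking it down
    return speeds
```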

(Note: The label on the X-axis should read “Time (mm:ss)”, but I’m too tired to go back and change it now. Click on the plots for full-sized versions.)


The CPU temp peaks around 60 C. The heating seems to be fastest when the fan runs below 1000 RPM.

Now, even at high speeds, the new fan is very quiet, so noise is no longer a problem. However, the fan doesn’t cool the CPU effectively at speeds below about 1000 RPM, causing the CPU to heat up very quickly if it’s loaded during the lower part of the fan’s cycle. This problem was easy enough to solve, though. Instead of using the BMC firmware hack to lower the speed threshold, I used it to raise the threshold to 1000 RPM (the Python script already allowed this).


With the threshold set at just below 1000 RPM, the fan speed kicks back up before the CPU temp can rise too far.

For reference, here’s a plot of the fan behavior with the stock fan and stock firmware. The fan speed really doesn’t vary at all, regardless of CPU temperature.


I also tested out the new fan in the PE 840, and gathered similar results. CPU temperature still doesn’t factor into fan speed.



Fan speed is flat. CPU temp isn’t.

I haven’t bought a second fan for the PE 840 yet, and I’m not sure if I will. For some reason, it doesn’t seem as loud as the 830, even though both run the fan at the same speed.

I really wish I knew why the firmware keeps trying to lower the speed on the new fan. My best guess is that Dell’s PWM fans don’t work quite the same way as standard PWM fans (because re-inventing PWM obviously makes sense…).

Next Gen Gaming Consoles

(This post is largely conjecture and somewhat uninformed; I spent a few minutes reading parts of Anandtech’s comparison and not much else.)

We now know the hardware details for the next gen gaming consoles. In short, they look a lot more like PCs than any console to date (at least so far as I’m aware*). Specifically, I’m talking about the PlayStation 4 and the Xbox One.

AMD won the contract to supply the processor and graphics for both systems, and I expect that they’re nearly identical architecturally. The two biggest differences are that the PS4 has almost 3x the memory bandwidth of the Xbox One and the PS4 also has half again as many compute units in the GPU. Other than that, both machines use an AMD Jaguar chip with 8 cores. I’m pretty sure these 8 cores are in a 4-module arrangement, which means these consoles really only have 4 cores worth of floating point hardware, though that’s not clear from only 5 minutes of googling.

I think the PS4 will have a distinct performance advantage, but I’m not sure how much that’s going to matter. Both systems are far more powerful than their predecessors, and neither should have any problem driving a 1080p display at extreme detail. For the immediate future, the performance difference may not even be noticeable. Of course, the software environments will be more optimized than on a standard PC, so I expect the performance of both systems to be stellar.

I think the sales and marketing approach is going to matter a lot to the success of these systems. I believe Microsoft and Sony have rather different ideas about how people will use gaming consoles as we move forward. Microsoft is going for more of an “entertainment console” approach, while Sony is building more of a “spare no expense” high-performance gaming rig. I don’t know which will appeal to consumers. Microsoft has started off on the wrong foot by threatening to require a persistent Internet connection and trying to kill off the used game market. We may see a split between people who use their systems primarily for gaming vs. those who use their systems primarily for entertainment.

* Apparently the first Xbox was x86 based.

ThinkPad T430s Review (Part 1)

Part 2

I decided at the end of last summer that it was finally time to replace my Dell Latitude D820. I looked at 14″ business laptops from both Dell and Lenovo. I started looking at the T430s, saw a good deal, and jumped on it.

The T430s is thin, light, and sturdy. The build quality on this thing is great. I regularly carry it around open by the corner of the palm rest, and I don’t detect any flex. At just under an inch thick (0.83in – 1.02in), it’s easy to move it around and get it in and out of bags one-handed. The screen is a tad flimsy (it’s maybe a quarter of an inch thick), but this hasn’t really been a problem. The hinges, of course, are indestructible.

Probably the most controversial thing about the Tx30 line is the new keyboard. Lenovo’s recently been switching all of their laptops to a chicklet/island style keyboard. This change rubs some veteran ThinkPad users the wrong way, as ThinkPads are known for their awesome keyboards. I never spent much time using a ThinkPad before this machine, so I can’t compare with the old keyboards. I can say that the keyboard on my ThinkPad is the best keyboard I own. The keys have a fair amount of travel, and their response is satisfyingly crisp. I believe it’s the best laptop keyboard available. Consumer laptops all have mushy keyboards (I recently assessed the state of keyboards at Best Buy), and the keyboards on Dell’s business laptops, while much better than the consumer laptops, still leave something to be desired.

My only keyboard complaints are the Caps Lock, Page Down, and Page Up keys. The Caps Lock key doesn’t have an associated LED, and I regularly find myself brushing it and then wondering if I hit it hard enough to activate Caps Lock. Without an indicator, I’m forced to guess and check. The Page Up/Down keys are tiny and are nestled in above the left and right arrow keys. The arrow key matrix is the only place on the keyboard without much gap between the keys, and I sometimes have problems hitting Page Up when I really meant to hit left arrow.

On a related note, I decided to get the backlight option on the keyboard, and I must say that that was a very good choice. This option pays for itself the first time you try to work in the dark. I hit Fn+Space and I can see my keyboard again. The backlight has two brightness settings. I can change the current setting by tapping Fn+Space. Tapping Fn+Space a third time turns off the backlight and turns on the ThinkLight, which is another great feature (albeit one that’s apparently disappearing on the next generation).

One disadvantage of having a compact 14″ laptop is the loss of depth. The keyboard is roomy, but what I get in keyboard space I give up in touchpad arrangement. The arrangement gives priority to the TrackPoint nub/buttons, which I don’t often use. I prefer to use the touchpad and the Left/Right buttons below the touchpad. Unfortunately, the buttons aren’t very tall (1/3″?) and are situated right on the edge of the chassis. They’re easy to miss.

The touchpad itself is rather large. The size is nice, because it basically gives me more resolution for small gestures. Unfortunately, the size of the touchpad and the lack of a physical delimiter on the pad means that I tend to palm the touchpad when typing. A few generations ago (Tx00?) Lenovo started using textured touchpads. It took some getting used to, but I actually like having the little bumps.

This post pretty well covers my thoughts on the keyboard and other input devices. I’ll talk about other features and give more thoughts in a future post.

Part 2

CCSVLIB 20130307

Update 3/7/13: I forgot to do a collision search on Google for “csvlib” before releasing this. Apparently everyone calls their CSV handling utilities package “csvlib”. I’ve renamed mine CCSVLIB.

I keep finding myself in situations where I’ve got data in spreadsheet form but I want to perform some analysis or transformation on the data beyond my capabilities with LibreOffice Calc or Microsoft Excel. Then sometimes, I want to go the opposite direction, and have my software produce output in a format that I can convert into a spreadsheet. Fortunately, both Calc and Excel can read and write comma-separated values (CSV) files. CSV files are nice to work with: they’re plain text and thus easy to read or write from my own software.

Well, easy in theory. In practice, CSVs that come from different sources may use different formats (quotes vs. no quotes vs. optional quotes, are commas allowed in the data, etc.), which makes reading CSVs a little too painful for use in small one-off scripts and programs. In addition to inconsistent formats, the code necessary to correctly parse a CSV file is often larger than the code that performs whatever analysis or transformation I want.
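For a quick illustration of the corner cases that make ad-hoc parsing painful (shown here with Python’s csv module rather than the C library this post is about): a quoted field may contain commas, and an embedded quote is escaped by doubling it.

```python
import csv
import io

# RFC 4180-style input: CRLF line endings, a comma inside a quoted
# field, and a doubled quote inside a quoted field.
raw = 'name,comment\r\n"Smith, John","she said ""hi"""\r\n'

rows = list(csv.reader(io.StringIO(raw)))
# rows[1] -> ['Smith, John', 'she said "hi"']
```

A naive split on commas would break the first field in two and leave the quoting intact, which is exactly the kind of bug that makes one-off scripts painful.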

I deal with CSV files enough that I decided to write my own CSV parsing library. I suppose I could have searched the Internet for someone else’s solution, but I needed a project, and I think parsing is fun. I also decided that if I was going to implement a CSV parsing library then I was going to do it right. I started by looking for a standard for the CSV format and found RFC 4180. After a couple of hours at my keyboard, I had a working parser and a decent data structure for pulling data from RFC 4180 CSVs into memory. My library came up in conversation with my supervisor a few days back (I’m a TA for CS 115 at UK), and she mentioned that she wanted a copy. I decided I wanted to release the software, so I added the capability to write CSV files, polished the API, and wrote some documentation.

What makes CCSVLIB a better choice than any of the other CSV parsing libraries for C/C++? Objectively speaking, nothing, or at least nothing that I know of. I haven’t taken a close look at any of the other stuff that’s available. I can say, based on a cursory Google search, that there aren’t many implementations of RFC 4180. CCSVLIB implements RFC 4180 (well, at least mostly), so it should be able to consume most sane CSV files. Also, CCSVLIB is simple, short, and well documented. The current version is 1051 lines of C, about 400 of which are comments.

I’m releasing CCSVLIB under the BSD license. You can download the source tarball from the link below or from the software page. Documentation and an example are included in the download.


KOAP 20130205

In an effort to sustain momentum going into the semester, I was tentatively scheduled to give a talk about KOAP for our research group Tuesday afternoon. KOAP is my tool for developing OpenCL applications using the C host API. I took the opportunity yesterday afternoon to change a few of the things that were bugging me about KOAP.

First, a little bit about how KOAP works internally (well, how it worked until Tuesday). KOAP takes as input a file containing C code, OpenCL code, and KOAP directives. KOAP expands the directives into OpenCL API calls and combines all of the OpenCL code into a string for compilation at runtime. KOAP does not use formal parsing methods; the parsing takes place over multiple passes and is very ad hoc. KOAP reads the input into a single string, processing comments and KOAP includes (like C preprocessor includes) in this first step. KOAP then separates the OpenCL source from the C source and breaks the source strings into double-ended queues (STL deque) of strings, using newline characters as delimiters. KOAP expands directives one line at a time, building a deque of output lines as it goes.
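A toy sketch of that final, line-at-a-time pass (in Python rather than KOAP’s actual implementation; the directive syntax here is invented for illustration):

```python
def expand(lines):
    """Walk the source one line at a time, expanding directives and
    passing everything else through, building an output list as we go."""
    out = []
    for line in lines:
        # '#pragma koap' is a hypothetical directive form, not KOAP's real syntax
        if line.strip().startswith("#pragma koap"):
            # a real expansion would emit the OpenCL host-API calls here
            out.append("/* expanded: " + line.strip() + " */")
        else:
            out.append(line)
    return out
```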

Why STL deques you ask? At one point, that was the only STL container that supported the methods I needed (or thought I needed). My first modification Tuesday was to replace all deques with STL vectors. Vectors support all of the needed operations, and are better suited to the problem (I’m mostly using the element access operator [] and the push_back method). KOAP has been released for over two years now, and I’ve spent two years thinking it was dumb that KOAP used double-ended queues. That’s not bugging me anymore.

My other modification is actually user visible. KOAP understands a handful of arguments for things like setting the flags passed to the OpenCL compiler, setting the device type to be used (OpenCL works on CPUs, GPUs, and other accelerators), and a few other things. All of the command line arguments came in pairs (-argname argument). I had written a very dumb bit of code to parse the command line arguments and set the necessary internal flags. My old parser required that the KOAP file for processing be the last argument, and would only process one KOAP file. I’ve rewritten the argument parser to be more general. The new parser is smarter about how it parses the arguments and accepts as many KOAP input files as you wish to give it.
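A minimal sketch of a pair-style parser like the one described (in Python rather than KOAP’s implementation; the flag names in the example are invented):

```python
def parse_args(argv):
    """Flags come as '-name value' pairs; anything that isn't a flag is
    treated as an input file, so any number of input files is accepted."""
    opts, files = {}, []
    i = 0
    while i < len(argv):
        if argv[i].startswith("-"):
            opts[argv[i].lstrip("-")] = argv[i + 1]  # consume the pair
            i += 2
        else:
            files.append(argv[i])                    # input file, any position
            i += 1
    return opts, files
```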

The queues and the argument parser were the two things that bugged me the most about KOAP. Now that they’re fixed, I’m reasonably satisfied with how KOAP is structured internally. I’m not quite to the point of being proud of the codebase, but at least now there’s nothing in KOAP that I find embarrassing.

We have a MakerGear M2


The research group I work with ordered a MakerGear M2 back in November, and it finally arrived yesterday. My lab-mates and I took the afternoon to pull it out of the box and become acquainted with it. We ordered the pre-assembled package, so this was mostly a plug-and-play operation.

I arrived in the lab shortly after the unboxing, so I can’t describe the packaging in great detail. I’m told that the box was covered in “fragile” labels, and that the zip ties on the printer were color coded: red for ties that should come off and black for those that are permanent. MakerGear uses high-end chocolates as packing inspection tokens; I can say from experience that those are very good. Before shipping the assembled printer, the folks at MakerGear printed two test patterns, a bracelet and a gorilla head; both were shipped with the printer.

The bracelet and gorilla head test prints.

We started by fiddling with the motion on the head and the bed. Before printing anything, we brought the head and the bed up to printing temps (185 C and 60 C respectively, for PLA) and ran some filament to clean the head. Finding the right calibration settings took three or four attempts at printing something. Somewhere in the process of learning how to manipulate the machine, we managed to move the bed to the positive limit in Y and the power/sensor cables snagged on the frame. The power connector simply unplugged. Unfortunately, the sensor wire snapped at the solder connection inside its connector. After half an hour of negotiating with the connector housing, we were able to extract the metal contact and re-solder the connection. One more round of leveling and we began printing a Companion Cube. We’ve apparently got some issues with our configuration for the skirt, and I think we probably need to adjust the head clearance, but our first printing looks pretty reasonable.

Printing the Companion Cube.

I’m certain I’ll have more to say about our printer in the coming days/weeks. It’s an interesting device, and I expect it to be a good toy (or distraction).