Upgrading an ASUS P5E3 Motherboard to USB 3.0

Upgrading an older motherboard, like my fairly old ASUS P5E3 Premium/WiFi-AP @n, to USB 3.0 is not as straightforward as it may seem. At least not if you want to understand the limitations.

Why USB 3.0 and not USB 3.1? Largely because 3.1 is almost certain to run into BIOS issues on older motherboards, due to chipset limitations. I would advise steering clear of an upgrade to USB 3.1 unless you know your BIOS has been updated to handle it (if required) and your chipset supports the spec fully.

My particular motherboard has 12 USB 2.0 ports, with 6 on the rear panel and 6 available through motherboard header pins. It uses the Intel X48 chipset (with the ICH9R southbridge).

Since we want to throttle our USB 3.0 ports as little as possible, we’ll be looking at plugging directly into the PCIe (PCI Express) bus, using a card such as this:

The card shown above has 2 rear USB 3.0 ports and a 20 pin internal header to connect a further 2 USB 3.0 ports internally, such as for a USB 3.0 front-panel option, should you decide to add one.

The USB 3.0 spec calls for more power (900 mA at 5 V per port) than can be supplied by the PCIe slot, so the Molex/AMP power connector on the card is a must in order to meet the specification. A standard ATX PSU should be able to supply up to 4 A on such a connector.
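As a sanity check on those numbers (the 900 mA per-port figure is from the USB 3.0 spec; the 4 A Molex figure is the typical value quoted above, not a measurement), a quick calculation:

```python
# Sanity check on the power budget for a 4-port USB 3.0 card.
# 900 mA at 5 V per port is the USB 3.0 spec figure; the 4 A Molex
# figure is the typical PSU value quoted above, not a measurement.

USB3_PORT_CURRENT_A = 0.9
PORTS = 4

total_current = PORTS * USB3_PORT_CURRENT_A   # 3.6 A on the 5 V line
total_power = total_current * 5.0             # 18 W
print(f"{PORTS} ports: {total_current:.1f} A at 5 V ({total_power:.0f} W)")
print("within a 4 A Molex feed:", total_current <= 4.0)
```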

It is also a PCIe v2.0 card, which should fall back to PCIe v1.1 speeds if it finds itself plugged into an older slot.

Here’s another card that seems to tick the right boxes, although not having tried it, I’m not sure whether it installs without needing to fiddle with alternative driver software, which can sometimes be troublesome:

The one thing that we need to understand is that a single USB 3.0 port can theoretically consume 5 Gb/s of bandwidth at burst, so anything less than that on the path between your USB device and RAM means a drop in throughput.

PCIe slots, on older motherboards, tend to come in PCIe version 1.1, or, if you’re luckier, PCIe version 2.0 flavours, or a mix of both.

The difference is that PCIe 1.1 runs at a maximum burst rate of 2.5 Gb/s per lane, whilst PCIe 2.0 bursts at twice that speed, meaning up to 5.0 Gb/s. So, plugging our card into a PCIe 1.1 slot would immediately halve its potential throughput.
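A quick back-of-the-envelope comparison. Both PCIe 1.x/2.0 and USB 3.0 use 8b/10b line coding, so the raw per-lane figures compare like-for-like, with effective payload rates of roughly 80% of raw in each case:

```python
# Raw per-lane line rates in Gb/s. PCIe 1.x/2.0 and USB 3.0 all use
# 8b/10b encoding (8 data bits per 10 line bits), so the raw figures
# compare like-for-like and the effective payload rate is 80% of raw.

PCIE_RAW_GBPS = {"1.1": 2.5, "2.0": 5.0}
USB3_RAW_GBPS = 5.0

def effective_gbps(raw_gbps):
    return raw_gbps * 8 / 10

for gen, raw in PCIE_RAW_GBPS.items():
    share = min(raw, USB3_RAW_GBPS) / USB3_RAW_GBPS
    print(f"PCIe {gen} x1: {raw} Gb/s raw, {effective_gbps(raw)} Gb/s effective, "
          f"{share:.0%} of a USB 3.0 port's ceiling")
```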

Looking at my particular example motherboard’s PCI slots, we see ….


Assuming you’re not otherwise using all the available PCIe v2.0 slots (like for example, using two video cards in an SLI configuration, or similar), you might be lucky enough to have a blue slot available for a full 5.0 Gb/s of throughput.

Yes, we’re going to be plugging a PCIe v2.0 x1 (single-lane) card into a PCIe v2.0 x16 (sixteen-lane) slot, which could be construed as wasteful if you have better uses for that slot’s throughput.

As noted, most USB 3.0 PCIe expansion cards are x1. That is to say, they have a single PCIe lane, and since the 5.0 Gb/s transfer figure is per lane, we can’t hope to get any more than that from however many USB ports are attached to the back of the card. All those USB ports have to share the available bandwidth. This includes any USB 3.0 hubs that you cascade off the back of this connection.
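As a crude illustration of that sharing (assuming an even split between devices transferring flat-out simultaneously, which real traffic rarely is):

```python
# All the card's ports hang off one PCIe lane, so simultaneous
# transfers share it. Crude model: an even split between devices
# transferring flat-out (real traffic is rarely this tidy).

LANE_GBPS = 5.0   # PCIe v2.0 x1, raw

def per_device_gbps(active_devices):
    return LANE_GBPS / active_devices

print(per_device_gbps(1))   # 5.0
print(per_device_gbps(4))   # 1.25
```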

To support two USB ports at 5.0 Gb/s each, simultaneously, you’d need an x2 PCIe v2.0 card. I have seen at least one 8-lane (x8) card which feeds 8 USB ports, but nowhere to buy it from, so there is no immediate way to judge the economics of such an upgrade. Here’s a link to that particular card.


So, if you’re fairly serious about the upgrade, then that’s the way to go, but it could be argued that you should just upgrade your motherboard/PC.

So, we plug our 4-port PCIe v2.0 x1 USB 3.0 expansion card into a PCIe v2.0 x16 slot to get the full 5.0 Gb/s, which will be shared amongst the 4 ports (in my example case), and that’s that – happy in the knowledge that we did the best we could for the money.

Finally, a word of warning. I upgraded another PC with a different USB card from the one shown here, and ended up smoking a USB 2.0 gaming keyboard and two high-capacity USB 3.0 flash drives that were plugged directly into the back of the card. I have no idea why. Everything checked out, in terms of installation. I have yet to meter the voltage coming out of those ports, but I’ll get around to it soon.

Also plugged into that card was a USB 3.0 cable, leading to a USB 3.0 hub. The hub continues to work without issues and has six mixed USB 3.0/2.0 devices attached to it, along with a chained USB 2.0 hub.

Reading through the user reviews on the card I bought, it seems mine was not an isolated case. Out of curiosity, I read the critical reviews on other cards and found similar reports there too.

Right now, I’d be tempted to err on the side of caution and just plug a USB 3.0 cable from any of these expansion cards directly into a USB 3.0 hub, then use the available connections from there (remembering always that you’re sharing that 5.0 Gb/s of bandwidth). FUD? Maybe. The voltmeter will tell me more when I get to it.

Posted under Computing

Blog Revival

So, I thought I’d try and revive this blog, so I re-imported from an old export backup. Just got to locate all the images that go with the text and review the whole thing. Maybe reactivate the old plug-ins too.

Update: Images done, Review underway.

Posted under Wordpress

Hot Swapping Desktop Drives in Windows

Some notes on the subject of hot-swapping (also known as hot-plugging) SATA desktop drives in a Windows environment. Hopefully this article will help you to understand what you can and can’t (reasonably) do with this technology.


An eSATA (right) to SATA (left) cable.

Some motherboards have eSATA available on the rear I/O panel, or on a motherboard header paired with the appropriate cable and a PCI bracket fitted with an eSATA connector. Essentially, these make available a SATA connection, but using a more robust connector arrangement and slightly different electrical specifications to a normal SATA port.

eSATA provides a slightly higher output signal voltage and a greater input signal voltage sensitivity than normal SATA, but the protocol and logic signalling are the same for both.

The eSATA connector is designed in such a way that it will take many more insertions and extractions, before it degrades. Just as importantly, it helps to eliminate ESD (Electrostatic Discharge) and EMI (Electromagnetic Interference), both of which can otherwise lead to problems. It does this by recessing and appropriately staggering the contacts and by specifying shielded cables for connections. With staggered and recessed contacts, there is less chance of a short, or misconnection due to the connectors mating slightly askew. Usually the ground pin makes first contact on insertion, then the signal pins, then the power pins.

The maximum run of eSATA cables is specified as 2 metres.

In a simple single-disk eSATA scenario, the power to the external disk drive would normally also be external and divorced from any other drives, thereby having no adverse effects on anything other than the drive it powered.

Thus eSATA is ideal for hot-swapping a desktop drive, in much the same way as you might connect/disconnect an external USB hard drive, using cable-only arrangements. However, some additional requirements need to be met first, at the software level.

Firstly, your BIOS must support AHCI (Advanced Host Controller Interface) mode and this must be enabled. Secondly, your Windows operating system needs to be Vista or later, and it must already have the AHCI drivers installed for the SATA controller on which you are going to perform disk hot-swaps. If this controller is where your boot disk lives, then you must install the Windows AHCI driver before enabling AHCI mode in the BIOS; failure to do so means your system won’t boot.

Windows Device Manager by Connection - ICH9R Chipset SATA AHCI Controller and Drives

It is also worthwhile turning off write-caching and flagging the drive for quick removal. You will find these options under device properties for the drive.

Windows Disk Device Properties - Quick Removal Option

It also won’t do any harm to turn off System Protection on any drive that is going to get regularly swapped.

Windows System Properties - System Protection per Drive Option

Windows used to create paging files on any mounted local drive if the default option to Automatically Manage Paging Files for All Drives was set, but under Windows 7 it does not appear to do so anymore. So no worries there, except for the boot volume, which will always have a paging file unless you’ve explicitly turned it off; since we can’t hot-swap that in a non-RAID scenario anyway, it’s a non-issue.

Having satisfied these requirements, you can perform a hot-swap, always remembering that you cannot hot-plug your non-RAID boot disk and hope to get away with it.

Note that since the eSATA-connected disk drive is considered a non-removable local drive, there are no eject or dismount options when you right-click the drive in Windows Explorer, but there should still be a Safely Remove Hardware and Eject Media option in the Windows taskbar tray. If not, you can open the Control Panel’s Devices and Printers window, locate the drive, right-click it and you should find an Eject option there.

The most secure way to remove the drive is to select the Eject option in Windows, turn off the drive power once Windows notifies you that the device may be removed, wait for the drive to spin down and pull it out. Once the drive loses power, the heads should be safely parked and you minimise any risks associated with handling the drive. Needless to say, you should observe normal precautions against static discharge if the drive is being handled without any sort of protective outer casing. For a barebones drive, just avoid fingering the PCB, the pins and anything that looks fragile.

If you fail to eject the drive, Windows may take a few seconds (perhaps even as much as a minute) to decide that the drive has been powered down/removed, but your system will not hang during this time. You will hear the hardware device removal sounds if you have those configured and the drive will eventually disappear from Windows Explorer. Doing it this way runs the risk of some data loss.

It’s also not strictly necessary to turn off the power beforehand, although you should try to ensure that the drive is not in mid-access. Worth noting that most drives will be drawing over half an amp on the 5 volt line and around three quarters of an amp on the 12 volt line. That’s enough to cause arcing if the connect/disconnect operations are not cleanly done when the power is left on.
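Those figures work out to a little over 11 W for a spinning drive; a trivial check, using the half-amp and three-quarter-amp values quoted above:

```python
# Rough draw for a spinning 3.5" desktop drive, using the half-amp
# (5 V) and three-quarter-amp (12 V) figures from the text above.

rails = {5.0: 0.5, 12.0: 0.75}   # volts -> amps
watts = sum(v * a for v, a in rails.items())
print(f"~{watts:.1f} W while spinning")   # ~11.5 W
```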

Ultimately, though, there is no real reason not to Safely Remove Hardware and Eject Media, power down and remove. The main objectives are to avoid rebooting the machine, avoid disrupting the Windows session and keep any data safe, all of which we can achieve with this hot-swap operation.

To re-insert, reconnect the drive and power it up. If you then scan the disk for errors, none will be found, unless the errors were there previously.

Clearly, if the drive is in use by applications or services when it’s removed, then these should be shut down cleanly beforehand, to avoid any issues.

Some will say (and they are probably right) that this is not a true hot-swap. To do that, the reality is that the drive has to be part of a redundant RAID array, so that the upper levels of software never actually see the drive being swapped out at all. This short article is not exploring those options, nor how hardware RAID behaviour compares to software RAID in these scenarios – hence why we are talking about “Desktop” drives. In reality, the procedures being discussed here are probably more aptly referred to as hot-plugging.


Well, SATA is essentially the same as eSATA, except it is not designed to be hot-plugged in quite so cavalier a fashion. The protocol and logic signalling are the same, and the differences lie in the physical connectors and a minor change in the electrical specification (although see the section on chipsets, below). You are limited to shorter cables; the connectors are less robust and will degrade and wear more quickly with constant plugging/unplugging; and they need care and precision to avoid skewing them – leading to short circuits, spikes and damage – when making, or breaking, a connection.

That said, you can definitely hot-plug SATA and if you are using a passive backplane, such as the Sharkoon Quickport Internal 3 x 3.5” bay (fits in 2x 5.25” bays) shown below, to ensure the connections are made squarely and the drive electronics protected, then you can rest much easier. The same is true of various external caddies that are available out there, although these tend (sensibly) to be eSATA, rather than SATA.

Sharkoon 1 x 3.5” SATA 6 Gb/s Drive Bay

Sharkoon SATA Quickport Multi has 1 x 2.5” and 1 x 3.5” Bays so an SSD can be fitted alongside a conventional drive.

However, with passive multi-disk internal backplanes, there is still the issue of power to consider.

Power to a passive backplane should ideally be provided by separate cables from the common power supply unit (PSU) for each disk in the backplane, in order to minimise problems. Disks should never be inserted simultaneously, as there will be a relatively large start-up current drawn and, if your PSU is not up to it, some live spinning disks may be adversely affected. If the backplane has separate power switches for each disk, there is certainly no harm in using these when hot-plugging: after insertion, or before removal once the appropriate software removal procedures have been executed. Above all, be sure that your PSU is decently oversized and has plenty of SATA power cables available.
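To put some rough numbers on the start-up current issue: the ~2 A 12 V spin-up figure below is a typical 3.5” drive value assumed for illustration, not from any particular datasheet, so substitute your own drives’ numbers:

```python
# Worst-case 12 V rail load for a three-bay backplane. The ~2 A
# spin-up and 0.75 A idle figures are typical 3.5" values assumed
# for illustration, not from any particular datasheet.

SPINUP_12V_A = 2.0
IDLE_12V_A = 0.75

def worst_case_12v(spinning_up, already_running):
    return spinning_up * SPINUP_12V_A + already_running * IDLE_12V_A

print(worst_case_12v(1, 2))   # inserted one at a time: 3.5 A peak
print(worst_case_12v(3, 0))   # all at once: 6.0 A peak
```

Staggering the insertions keeps the peak draw close to the steady-state load, which is exactly why per-bay power switches are worth using.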

There are, of course, some active backplanes out there, although these tend to be more for 2.5” SAS disks than 3.5” SATA drives. Active backplanes have additional electronics which help to manage (soft-start) the power lines and eliminate ESD by ensuring that there is a solid connection before firing up. They are more suited to use in a production RAID server, due to their significantly higher cost. They also usually come with additional monitoring connectors that form part of the SAS specification, for RAID+SAS/SATA controllers to use/act upon.


Finally, I’ve seen some talk of chipsets, with some said to work with hot-plugging and others not (or not so well). This is primarily down to the device removal/insertion detection electronics provided by different chipsets. Some provide partial detection, others full and some earlier chipsets none at all. At the same time, some of those chipsets provide plain SATA ports and others provide both eSATA and SATA, again potentially with differing levels of insertion/removal detection. The chipset also has to be AHCI compatible in order for the BIOS to offer AHCI mode. The Intel ICH9R, for example, works well in all these respects. For other chipsets, it’s worth checking the manufacturer’s detailed specs to find out what is and isn’t supported.

Posted under Computing, Uncategorised

ISA Server 2004 HTTP Result Code 10051

This is an article regarding ISA Server 2004 HTTP status code 10051 “A socket operation was attempted to an unreachable network” in a simple ISA HTTP publishing scenario. In an external browser, you may also see the more generic IIS error code 500 “Internal Server Error”. Note that a status code of 10051 can be quite generic and may be caused by a number of different scenarios.

This one was driving me a little insane, mostly because, on the surface, it was a straightforward, simple scenario. I was using the ZoneEdit Failover service to hit the published URL of a simple HTML page on the default IIS site, every 10 to 15 minutes and it was failing almost every other time.

ISA Log Monitor

I set up an appropriate ISA log monitor filter to focus solely on the external failing test IP and analysed the results.

For a successful connection attempt, the ISA firewall and proxy log was showing the following actions:

  • Initiated Connection
  • Allowed Connection
  • Closed Connection (0x80074e20 FWX_E_GRACEFUL_SHUTDOWN)

For a failed connection attempt, the ISA firewall and proxy log was showing the following actions:

  • Initiated Connection
  • Closed Connection (0x80074e21 FWX_E_ABORTIVE_SHUTDOWN)
  • Denied Connection (0xc0040017 FWX_E_TCP_NOT_SYN_PACKET_DROPPED)
  • Invalid connection attempt (HTTP status code 10051 “A socket operation was attempted to an unreachable network”)

In the failed connection attempts, the FWX_E_TCP_NOT_SYN_PACKET_DROPPED result code was clearly being generated due to the FWX_E_ABORTIVE_SHUTDOWN result, so this could be ignored. The timing of the log messages varied, so sometimes the Invalid connection error came before the TCP_NOT_SYN and sometimes the Closed Connection came after the TCP_NOT_SYN. In any case, these were always grouped together and interspersed by successful connection attempts.

FWX_E_TCP_NOT_SYN_PACKET_DROPPED can be generated under a variety of more complex circumstances, but usually indicates a routing or NAT issue. Sometimes it is generated when an external hacker is searching for any vulnerabilities on your external IP. However, there was a clue here that helped turn on the light bulb for tracking down the eventual issue.

Very occasionally, I would see two simultaneous successful connection attempts and, at other times, two or three simultaneous failed attempts. So a quick glance might mistake these failures for random, but the overriding pattern was one of alternating success and failure.
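The filtering and classification step can be sketched in Python; the result codes are the ones from the log excerpts above, and an attempt is simply the group of codes logged for one connection:

```python
# Sketch of the log-analysis step: classify each connection attempt by
# its result codes and check for the alternating pass/fail pattern.
# The codes are the ones seen in the ISA log excerpts above.

FAIL_CODES = {"FWX_E_ABORTIVE_SHUTDOWN", "FWX_E_TCP_NOT_SYN_PACKET_DROPPED"}

def classify(attempt):
    """An attempt is the list of result codes logged for one connection."""
    return "FAIL" if FAIL_CODES & set(attempt) else "OK"

attempts = [
    ["FWX_E_GRACEFUL_SHUTDOWN"],
    ["FWX_E_ABORTIVE_SHUTDOWN", "FWX_E_TCP_NOT_SYN_PACKET_DROPPED"],
    ["FWX_E_GRACEFUL_SHUTDOWN"],
    ["FWX_E_ABORTIVE_SHUTDOWN", "FWX_E_TCP_NOT_SYN_PACKET_DROPPED"],
]
verdicts = [classify(a) for a in attempts]
alternating = all(v != w for v, w in zip(verdicts, verdicts[1:]))
print(verdicts, "alternating:", alternating)
```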

ISA Listener

The ISA listener that was used here was a straightforward HTTP (Port 80) only listener, listening on the external network (single IP), with Integrated Authentication.

ISA Listener Configuration

The above image is from a Spanish installation of ISA server, but should be recognisable to most.

ISA Publishing Rule

The ISA publishing rule was equally straightforward, using HTTP only and the corresponding listener shown in the previous section:

ISA Server Publishing Rule

IIS Configuration

The IIS configuration, for this page, was also straightforward. In the Default web site, the IIS properties of the HTML document (page) were selected and the file tab was set to allow only read access permissions and then, in the authentication methods tab, to allow anonymous access with no authentication methods selected:


The IIS default web site was bound to the appropriate interface and headers:



In the test configuration I was running a split DNS, with an A record for host.domain.tld pointing to the external ISA public IP and the internal DNS resolving the same FQDN to the internal IP for the IIS default web site. The following image shows the internal DNS domain, with suitable occlusions and highlights:


The DNS server was bound only to a single IP address, even though the test configuration had two other local NICs (the A records of which can be seen above), in addition to the public NIC:


The Stage is Set: Problems

Now, normally, the above configuration should have worked out of the box, without problems. However, as you have rightly guessed, it did not; it created the error scenario discussed in the opening section. So clearly there was something that I was not immediately taking into account.

I started by ruling out any problems with the ZoneEdit failover service, basically by studying the ISA log entries and deciding that, although the HTTP GET request being made was a little unusual – a partial (Range=0-499) request with an associated Req ID, expecting a 206 Partial Content response – it was in fact correct and supported by the ISA Server. I should also note, at this point, that the ISA Server cache was disabled, in order to remove it as a possible problem source.

Next, I thought I should rule out any routing issues, given that the connection attempt was failing intermittently. So I checked the ISA networks and network rules to ensure that the internal network was correctly set to NAT to the External network, along with anything else that might have been a source of problems, such as ensuring that the correct range of IPs was assigned to each network.

I checked local access to the default web site test page that was being hit, with multiple browsers, and checked name resolution with nslookup. Everything checked out fine.

Having drawn a blank there, it was head-scratching time.

The Solution

That’s when I came to the inescapable conclusion that the problem had to be with ISA Server’s access to the website, and the obvious candidate was the internal server name being used to access the default web site.

So I simply changed this:


To this, in the ISA publishing rule:


And suddenly everything started working.

Post Mortem

The only question remaining was why should an IP address work where an FQDN that was correctly resolving to the same IP each time did not?

That’s when it hit me that the culprit had to be WINS. I had been using nslookup to test the name resolution of host.domain.tld internally and it was always returning the same (correct) IP address, as there was only a single A record for the hostname host defined in the DNS primary zone for domain.tld.

I re-checked the DNS settings and sure enough, the option to use WINS was checked:


I also had multiple local NICs configured on the test ISA server, and WINS had a different idea of what the primary local IP for host.domain.tld was. Worse, what it was, according to WINS, varied depending on how the server booted up.

Now I don’t know for sure how ISA server was getting fed this incorrect local IP (bear in mind that the IIS website was not bound to it), but it was clearly alternating between one IP and another on a round-robin basis.
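A hypothetical illustration of that failure mode (the IP addresses are made up): if name resolution alternates between two local IPs and the published site is only bound to one of them, every other attempt lands on an unreachable address:

```python
# Hypothetical illustration (made-up IPs): if name resolution alternates
# between two local addresses and the site is bound to only one of them,
# every other publishing attempt dies with an unreachable network.

import itertools

BOUND_IP = "192.168.1.10"                                    # IP the site listens on
resolve = itertools.cycle(["192.168.1.10", "192.168.5.1"])   # round-robin answers

results = ["OK" if next(resolve) == BOUND_IP else "UNREACHABLE"
           for _ in range(4)]
print(results)   # ['OK', 'UNREACHABLE', 'OK', 'UNREACHABLE']
```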


Aside from the obvious fix of using an IP address in the ISA web/server publishing rule, instead of an FQDN, there are some other possible fixes that I could consider:

  1. Bind the IIS website to all the possible IPs that the FQDN could resolve to, run httpcfg query iplisten to see whether IIS is listening on the appropriate ports on each IP and, if not, run httpcfg set iplisten -i <ip-address>[:<port>] to add them as required.

    In my case this is not an option as I have other third-party services bound to port 80 on these secondary IPs.

  2. Deselect WINS in the DNS zone for the domain settings.
  3. Change the TCP/IP properties of the additional NICs to not register their addresses in DNS and to not use NetBIOS, or to use only NetBIOS over TCP/IP.
  4. Remove WINS entirely. I decided that this was still impractical, since Exchange 2003, OWA, some third-party apps, Network Neighbourhood, Samba and some mobile devices still rely on NetBIOS name resolution.
  5. Use an IP address in the ISA server publishing rule in lieu of an FQDN.

    I really wanted to avoid this, as a solution, as I fear I may run into problems later when using SSL if I need to match headers to particular certificate names.

In the end, I thought it was worthwhile trying to take control of what went into DNS and WINS by using a combination of the approaches in points 2 and 3 above. So I ended up doing the following:

  1. DNS bound only to the primary local IP. All DHCP responses send this address out as the primary and only DNS server and all local networks on all NICs would be allowed to route to it.
  2. The DNS zone for the primary domain is set to not use WINS.
  3. Set only the primary local NIC to use WINS. All other NICs on this machine do not use WINS.
  4. Set only the primary local NIC to register with DNS. All other NICs on this machine do not register with DNS.
  5. All NICs have NetBIOS over TCP/IP enabled.
  6. Explicitly define the host name in DNS with an appropriate A record.
  7. All NICs (including the external WAN interface) use the single local primary address for the internal DNS server and all have statically assigned parameters.

I should note that the configuration that I am using here is actually SBS (Windows Server 2003, Exchange 2003, ISA Server 2004) with a NIC for the internal LAN, another for the External WAN and a couple of Loopback Adapters for binding other services against. It also has a currently disabled WLAN adapter, which may be enabled at some future date.


In many ways, this was a simple error with a simple fix. However, I think that hides the whole story and over-simplifies the potential issues involved. It is still somewhat of a mystery to me why ISA server’s name resolution of an FQDN did not act in the same manner that nslookup did, but clearly I made the mistake of thinking that the existing configuration had complete control over local name resolution, when in fact it appears that it did not.

Whether I have that control now, and whether or not further issues will arise as a result of these changes, remains to be seen.

Posted under Computing, Uncategorised

Pitted Metal Script-Fu Script

I thought I would pick an online tutorial to follow and see how easy it was to create a Script-Fu Scheme script for GIMP. Turns out it took a little while longer than I expected. I picked the tutorial by Draconian on gimpchat.com, entitled How to Make Pitted Metal – Metal Wurx – Part XV. Below is a sample of what the script produces:



And here is the script:


  1. Draconian Tutorial on Pitted Metal at gimpchat.com
  2. Sandy Textures at cgtextures.com
  3. GIMP GMIC Plug-in
  4. Python Bevel script by “dd” in GIMP script and plug-in repository


Posted under Computing

WordPress Jetpack 1.4.2 Comments 403 Forbidden Error

Having problems with saving comments on WordPress posts after updating to Jetpack 1.4.2 and enabling the comments feature: a 403 Forbidden error is being generated. Can’t see anything obvious; it may take some time to fix.

Update: OK, so I deactivated the comments feature in Jetpack (click the “Learn More” button on the Jetpack features page and a Deactivate button appears beside it) and comments are working again.

Update: Looks like a broken link in the plugin. The “Post Comment” button links to jetpack.wordpress.com/jetpack-comment/ and not to …/wp-comments-post.php, where it should. This appears to be because Jetpack comments replaces the standard WordPress comments module, which also has the side-effect that no other comment plug-ins will work with it.

Update: More here, where one poster suggests that Jetpack comments will not work with themes that have their own comment forms. Having read all the comments there, I don’t think I will be using Jetpack comments anyway, as it appears to have an issue with OpenID (which I’m hoping to implement on here soon), does not work with other comment plugins and has limited customisation.

Posted under Computing, Wordpress

PowerShell: Use Asynchronous Events to Capture Process Output

I wanted to write a PowerShell 2.0 wrapper around ffmpeg, but I was encountering certain problems dealing with its output, so after trying various approaches, I settled on running it as a process and grabbing the output using the ErrorDataReceived and OutputDataReceived async events.

First I tried using delegate script blocks with the Process.add_OutputDataReceived() method to handle the output, but I soon realised that the solution was going to become stupidly complex, since PowerShell was not setting up a runspace for this scenario and the resultant crash was not pretty.

Then I decided to try Register-ObjectEvent, which seems to work pretty well. The bones of it look something like this:
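For illustration, the same per-line, event-driven idea can be sketched in Python, with reader threads standing in for the Register-ObjectEvent handlers (run_with_handlers and the callbacks are made-up names; ffmpeg writes its progress to stderr, hence draining both streams):

```python
# Python sketch of the same idea: fire a callback for each line of a
# child process's stdout/stderr instead of doing one blocking read.
# ffmpeg writes its progress to stderr, so both streams must be drained.

import subprocess
import threading

def run_with_handlers(argv, on_stdout, on_stderr):
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, text=True)

    def drain(stream, handler):
        for line in stream:              # one event per line, as it arrives
            handler(line.rstrip("\n"))

    threads = [threading.Thread(target=drain, args=(proc.stdout, on_stdout)),
               threading.Thread(target=drain, args=(proc.stderr, on_stderr))]
    for t in threads:
        t.start()
    rc = proc.wait()
    for t in threads:
        t.join()
    return rc

# Stand-in command instead of ffmpeg:
captured = []
run_with_handlers(["echo", "hello"], captured.append, print)
print(captured)   # ['hello']
```

The reader threads avoid the classic pipe deadlock you get when one full stream blocks the child while you read the other.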

Posted under Computing

Baking Powder and Yeast in Spanish: ‘La mal llamada levadura’

Well, this one is a little confusing if you’re looking at recipes in Spanish, since there are so many similar terms used for yeast and baking powder, and some of those terms can be used in a conflicting manner. (Baking powder, by the way, is not quite the same thing as baking soda: baking soda is pure sodium bicarbonate, a mineral salt with the chemical formula NaHCO3, while baking powder combines it with an acid salt and a starch.) Until someone clarifies otherwise, I have settled on the following definitions:

Yeast (a fungus – eukaryotic microorganisms):
  1. Levadura de panadería – Bread maker’s yeast.
  2. Levadura de cerveza – Beer yeast. Historically bakers sourced their bread yeast from local brewers, so there is little (if any) distinction between this term and bread making yeast. Bakers now tend to source their yeast from commercial laboratories, so the use of this term for baking may just serve to confuse. 
  3. Levadura seca – Dried yeast.
  4. Levadura en polvo – Powdered yeast (presumably also granules), but see (5) below in the Baking Powder terms list, because it really is used in both contexts.
  5. Levadura instantánea – ‘Instant’ yeast. Dehydrated yeast in powder, or granule form. Same as (4).
  6. Levadura fresca (also levadura prensada) – Fresh, or cultured yeast.
  7. Impulsor biológico – Unspecified biological catalyst, i.e. yeast, but be careful as Impulsor on its own is also used to describe a chemical catalyst – see (9) below.
Baking Powder (a chemical leavener – a mildly alkaline mineral salt plus additives):
  1. Levadurina Royal – Royal is the brand name and they don’t make yeast, only baking powder. The actual ingredients in one of their sachets are monocalcium phosphate, sodium bicarbonate, corn (maize) starch and calcium carbonate.
  2. Levadura Royal – I’ve seen this term (mis)used as well, but the use of Royal is the key here. It must be basically baking powder and not at all yeast.
  3. Royal – Sometimes just the brand name is used and it means baking powder is the main ingredient here.
  4. Levadurina – Since the term levadurina is used with Royal in (1), we assume the same here: it’s baking powder. Actually, my Spanish aunt confirmed (under duress) that this is what levadurina is, but everyone calls it levadura – la mal llamada levadura (‘the wrongly named yeast’). See why we have a problem?
  5. Levadura en polvo – Well, it says it on the Royal packet, so it must be so. We have found the culprit of all the confusion: the success of the Royal brand and the mislabelling on their packets. If you see this term used in a recipe, then you’ll either have to guess at what is intended, or seek clarification. Generally, if it’s an empanada/empanadilla (stuffed pastry), or bizcocho (cake sponge), then it will mean baking powder. In fact, most of the time it will mean baking soda is involved, but don’t forget that pure baking soda needs a malic, citric or similar acid to react with to create the CO2 (see (7)) that puts the bubbles in your pastry.
  6. Levadura Química – A mixture of bicarbonate of soda, wheat starch and preservative additives such as disodium diphosphate (E450/Na2H2P2O7). A common Spanish brand name is Vahiné.
  7. Gasificante – Again, can be loosely used to describe sodium bicarbonate packaged with malic and tartaric acids. These come in two separate sachets, to be mixed in different proportions depending on the recipe. The acid reacts with the bicarbonate of soda to form CO2 gas. A common brand name in Spain is Sodas Barrachina. This has been around for many, many decades (my grandmother used to use it) and used to come as tablets in blue and white paper wrapping. Nowadays it is blue and white sachets of powder.
  8. Polvo para hornear – Literally translated: ‘ovening’ powder, or more correctly (by extension), baking powder. A term mainly used in Mexico, but they also use the Royal brand apparently.
  9. Impulsor – I have seen this term used in this category/context, but unfortunately it is also used in the context of yeast. You will see Impulsor biológico as well as the term on its own in this context.


    Posted under Cooking

    PowerShell Script to Remove APIPA from Named Interface

    In a previous article entitled W7, Steam, MS Flight, Games for Windows and Failed Live Login, I outlined a temporary fix to remove the Automatic Private IP Address (APIPA) from a named interface, using NETSH, to stop GfW Live from getting confused about the interface’s primary IP and failing to log in to Live as a result. The temporary workaround does the job, but it is a pain to go through the steps manually each time, so I knocked together a quick PowerShell script to do the job for me.

    Here is the script, for anyone else interested in using it. As with any/all scripts on this site, standard disclaimers apply. All you have to do is modify the value for $interface (line #7) to match the name of your main interface (NIC) and you can then invoke it whenever you need to remove the APIPA from it.

    Note that it requires elevated privileges to actually execute the netsh interface ipv4 delete command, so you will see it prompt you for your machine’s administrator account password. Being a simple and straightforward script, it is clear, to the casual observer, why elevated privileges are being used and how – that’s one advantage of using a script for a task such as this.

    It took me a little while to figure out how to run with elevated privileges in PS. I fairly quickly abandoned the cmd scriptlets and switched to the .NET classes System.Diagnostics.ProcessStartInfo and System.Diagnostics.Process, due to niggling issues with various parameter combinations for what I wanted to do.

    Using the .NET classes, the RunAs verb on its own is not enough to force the created process to Run As Administrator. I suspect it only works if you’re launching via ShellExecute, which in this case we are not. I found it necessary to run as a sufficiently privileged user in order to avoid an Access Denied error.

    So this sample code doubles as a working example of how to get elevated privileges in PS for a specific operation. I would have preferred to find a method that causes the User Account Control (UAC) dialogue to pop up, but in the end settled for this method due to lack of time to investigate the issues further.

    Anyway, here is the script (Note: Made a couple of small fixes since I originally posted this):
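A rough sketch of the technique described above might look like the following. This is not the original script: the interface name, the APIPA address and the account name are all placeholders you would need to adjust.

```powershell
# Sketch only -- not the original script. $interface and the APIPA address
# below are placeholders; find the real APIPA with:
#   netsh interface ipv4 show addresses
$interface = "Local Area Connection"
$apipa     = "169.254.1.1"

# Build an elevated netsh process using the .NET classes mentioned above.
$psi = New-Object System.Diagnostics.ProcessStartInfo
$psi.FileName        = "netsh.exe"
$psi.Arguments       = "interface ipv4 delete address name=`"$interface`" address=$apipa"
$psi.UseShellExecute = $false
$psi.UserName        = "Administrator"   # a sufficiently privileged account
$psi.Password        = Read-Host "Password for Administrator" -AsSecureString

# Launch netsh under the privileged account and wait for it to finish.
$process = [System.Diagnostics.Process]::Start($psi)
$process.WaitForExit()
```

Running `netsh interface ipv4 show addresses` before and after is a quick way to confirm the APIPA has actually gone.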

    Posted under Computing

    PowerShell Code Signing Script

    Some time ago I posted about an issue regarding digitally signing PowerShell scripts from within the PS Integrated Development Environment (IDE). Now I would like to post about a PowerShell script to actually automate the signing of other PS scripts and also to automatically fix/bypass the aforementioned issue by rewriting the target script file for you in Unicode Little-Endian (UTF-16) format.

    For this to work you should, of course, have a certificate that can be used for code-signing in your Current User Personal certificate store. I won’t go into how you get that, or how you install it, as that is amply covered elsewhere on the ‘net.

    First some brief notes about the script. Call it with the script you want to sign as the first parameter, for example:

    .\sign.ps1 .\sign.ps1

    As can be seen, it can sign itself quite happily, after which you can lock down the execution policy with:

    Set-ExecutionPolicy -ExecutionPolicy AllSigned

    Before you auto-sign it and lock down the execution policy, you should probably edit it to change the default thumbprint to your code-signing certificate’s own thumbprint. That’s the code on line #6, below.

    You can get the thumbprint hex value by viewing the certificate on your machine. The script will attempt to use this specific certificate for each signing. Make sure you enter this with uppercase hex letters and no spaces, or hyphens. Edit: Case now immaterial, since script converts it.

    If it does not find the certificate, either because it is not in the correct store, or because the thumbprint does not match, then it will attempt to find and use any code-signing certificate in the user’s personal certificate store. The first one it comes across will be the one the script uses instead.
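That lookup-and-fallback logic can be sketched as follows. The thumbprint and target file are illustrative placeholders; this is not the original script.

```powershell
# Sketch only -- thumbprint and file path are placeholders, not real values.
$thumbprint = "0123456789ABCDEF0123456789ABCDEF01234567"

# Try the specific certificate in the current user's personal store first ...
$cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert |
        Where-Object { $_.Thumbprint -eq $thumbprint.ToUpper() }

# ... otherwise fall back to the first code-signing certificate found there.
if (-not $cert) {
    $cert = @(Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert)[0]
}

# Sign the target script, timestamping via the Comodo server mentioned below.
Set-AuthenticodeSignature -FilePath .\target.ps1 -Certificate $cert `
    -TimestampServer "http://timestamp.comodoca.com/authenticode"
```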

    If you have several certificates you want to use, you can pass a specific thumbprint as the second parameter to the script. Your certificates should all, of course, be suitably protected with a strong password, and as such a dialogue will pop up during the execution of the script, where you can sanction the use of your certificate.

    Note that certificates should also be in the current user trusted publishers store, since that is where PS checks when deciding whether a certificate is trusted, or not, each time a script is run. i.e. It is not enough to just trust the issuing Certificate Authority (CA). By default, Windows imports code-signing certificates into the current user personal store, so it is all a bit of a mess. You could copy the code-signing certificate into the trusted publishers store and delete it from the personal store, then amend the path in the script – then again your code-signing certificate may be used for multiple purposes …

    You might also want to create a function or an alias for the script, such as this:

    function sign([String] $script) {&((Split-Path $profile) + "\sign.ps1") $script}

    The script itself works around the issue of files needing to be in Unicode (UTF-16) Little Endian format by always attempting to convert them to the target format. It does this by copying them to a temporary file with the –encoding Unicode switch of the file system provider and then renaming the original script, renaming the converted copy to the original, and finally deleting the renamed original.
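That copy-and-swap can be sketched as below; the file name is an assumed placeholder, not taken from the original script.

```powershell
# Sketch only -- $target is a placeholder path for the script being converted.
$target = ".\myscript.ps1"
$temp   = "$target.tmp"

# Write a copy of the script in Unicode (UTF-16 LE) encoding ...
Get-Content $target | Out-File -FilePath $temp -Encoding Unicode

# ... then swap: set the original aside, promote the converted copy, tidy up.
Rename-Item $target ((Split-Path $target -Leaf) + ".orig")
Rename-Item $temp (Split-Path $target -Leaf)
Remove-Item "$target.orig"   # skip this step if you want to keep the original
```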

    I’ve tested this on a variety of formats and not come across any problems, although nervous types may want to comment out line 63 in the script, to leave the renamed original behind, just in case.

    Finally, the script uses the Comodo CA timestamping server, as discussed elsewhere on this Blog. You can remove this if you feel that it cannot be relied on long-term, or for whatever other reason you may have. If you do you will, of course, need to ensure that your certificates do not expire for a long period of time – at least long enough that the code can be re-signed, or that it will lapse into obsolescence anyway.

    Posted under Computing