Archive for Computers and Internet

Bon Vonage

Today is our “cut-over” day from Qwest telephone service to Vonage telephone service. I’ve been a faithful “plain old telephone service” (POTS) customer going back ages and ages to my BBS days, but it just doesn’t make sense anymore.

Qwest’s service was fine, to be clear. If they offered a Vonage-like service for the same price, I probably would have switched services rather than carriers.

That out of the way, I’m pretty excited about Vonage! For those of you not aware of the service, it’s a Voice-over-IP (Internet) telephone service. They provide you with a “box” you plug into your home network — ie, you have to already have broadband Internet service — and then you plug a regular telephone into that box. Ta da! You now have a VoIP telephone setup!

So, why switch? Well, Vonage would have you believe that you’ll save tons of money. Some people probably do. We’re actually spending about $3 more per month for the service, not including the high-speed Internet we would have paid for anyways. We use our mobile phones for long distance calls and did without any of the bells-and-whistles offered by Qwest (for added $$).

What sold me was a couple of things that Qwest could not, or would not, provide:

  1. Flat fee calling – I get unlimited local and long distance calls for a low price. This is the key selling point for lots of folks, I suspect, but not so much for me due to mobile phones.
  2. Voicemail that comes into my email as an audio file – the old answering machine is now officially shut off, and I can get my home voicemails while I’m at work (so there’s still time during the day to call folks back!)
  3. Flat fee INTERNATIONAL calling – this was a big selling point due to some family and friends over in the UK. Now we can stop buying calling cards and start calling right from the home phone. Very cool!
  4. Ability to add a second number (or even a separate/second line) from any geographic area – if we decided we wanted to add a line for fax or for a home business or something it’s as easy as doing the configuration on the website.
  5. Easy to manage through the website – their website is hella slow during the day sometimes, but it lets me review and modify just about anything about my account once I finally get in.

Ok, so that’s enough cool features for now. And I didn’t even get to the part about how you can bring your phone number with you wherever you are, whenever you move or travel, from that point on! Just plug the box into a network and BAM, you’re live on the telephone network. Can you see I’m excited?

Maybe some day we’ll go to mobile-phone-only. But that day is a while off I think, since we still need to buzz people into our building, have a “shared line” for the two of us, and keep from giving out our mobile number to every vendor who requires a “home phone number” from us… Plus, Cingular (wireless) won’t send my voicemails to my email address either, so they have some catching up to do there!

Update – 2 hrs later – Whoops, I realize I left off a 6th reason that helps to explain why a huge cheapskate like me would pay $3 more for their service. They throw in all the added-value services for free. Voicemail = Free, Caller ID = Free, etc, etc. Just like a mobile carrier. I pay $3/month more, but I get $20/month or more worth of features in Qwest’s pricing.

Comments (1)

Arrested Development on MSN

Arrested Development returns… to the Internet. In a pretty cool, first-of-its-kind distribution method, Arrested Development (all 53 episodes) will be made available in syndication simultaneously through two cable services (HDNet and G4) and also THROUGH MSN ON THE INTERNET! See the Reuters details.

Part of what makes it such a special arrangement is that MSN will provide access to all 53 of the episodes on-demand, rather than on a schedule. So this means you’ll be able to watch them straight-through — start to finish — if you prefer!

It’s a shame the show was so underappreciated during its run, but it’s the gift that keeps on giving… now, by being a trendsetter in distribution method!

Comments off

Google Analytics

Hmm… not sure how I feel about this. I set up Google Analytics on my various websites a few weeks ago.

Why? Because I wanted to get the rich usage, uh, “analysis”, I guess. After a couple weeks of data collection, I’m impressed to say they give good charting and incredibly rich usage analysis. Good stuff.

So, why a blog post? Well, two reasons really:

1) Why is Google doing this and MS isn’t?
2) What is the impact of Google having all this data?

Let’s drill down.

The first one is short. I don’t have a good answer. Maybe they’re doing it because they’re ahead of the game. Maybe it’s because they are better in tune with their advertising customers’ needs. Maybe they’re making a huge mistake. I don’t know. What I do know is that, like Google Sitemaps, it seems like a no-brainer way to lock in websites concerned with search-engine optimization and advertising revenue.

The second one is a little bit longer. I was thinking about this the other morning, and the idea of giving Google (or MS) all of this data got me a little bit concerned.

Think about it — drop this javascript into each webpage on your site, and it is basically phoning home to Google all of the details about who is looking at your site, where they’re from, how long they spend there, and what keywords led them to “a sale”.
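
If you haven’t seen it, the tracking snippet is just a couple of lines of HTML/JavaScript pasted into every page (in a WordPress world, typically the theme’s footer template). This is the classic Urchin-flavored version as I remember it; the account ID is a placeholder:

  <script src="http://www.google-analytics.com/urchin.js" type="text/javascript"></script>
  <script type="text/javascript">
  _uacct = "UA-XXXXXX-X";  // your Analytics account ID (placeholder)
  urchinTracker();         // reports the page view, referrer, etc. back to Google
  </script>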

At best, Google could use this to “know” which parts of your site are interesting to people and which parts are uninteresting… an easy way to provide even more accurate “page rank” analysis. At worst, they could use this added data to jack up adword prices for “successful” keywords (ie – those that lead to a sale on your site). Hmm.

For all I know, there’s some provision in their agreement that says they won’t use this data. In which case, neither of these is a concern. But if it’s not in there… wow, what a goldmine!

Comments off

Blogging into the empty night

I’ve quite taken to the Pearls Before Swine comic, syndicated in the PI. It’s consistently funny in a style I like. But the one today was even funnier than usual, and particularly worthy of a blog post here.

Comments off

Posting cool maps from Jodi’s Garmin Forerunner 305

As has been discussed on her blog, we got Jodi a Garmin Forerunner 305 GPS watch a few weeks ago. One of the first things we were excited about was the way you can take the data/route collected by this device and make maps with it.

So, a bit of back-story… a few months before getting the 305, I had been poking around on the Interweb to see what we could expect to be able to do with this watch. In fact, since this was before the 305’s release, we were finding out what we could do with the older generation of the watch (the Forerunner 301).

When we got the 305 and I went back out looking for these sites, one of the first things I realized is that a lot of the websites and tools out there are designed for the older Forerunner models (and, as such, didn’t seem to work quite right with the new “Training Center” software and history data from the 305).

I tried offline software that purported to “split” the HST data file from the new Training Center into the individual runs (the file is a monolithic data store of all the runs in your training center, with no obvious in-band way to split them out). No love. The file was considered corrupted afterward. Ok.

I tried the online version of the log splitter. Again, no luck.

Now, your mileage may vary, but at this point I finally gave up on using the HST file data and moved on to other mechanisms.

I ended up at MotionBased. There was a flyer for this online analysis service for the Forerunner in the box, so it was a logical next step. Although they seem to be having growing pains (slow performance, etc), they offer quite a cool service — and way more analysis detail than the Garmin-provided Training Center software. For instance, based on the time/date/location of your run, MotionBased will provide you information on temperature and wind-speed/direction. Wow.

Ok, so after a bit of fidgeting (loading up the agent software, etc), I was able to get the data from Jodi’s first few runs up into MotionBased’s “digest”. The maps in MotionBased were pretty cool, but not quite what I wanted. My end goal here was to get exportable/reusable route-overlay maps that Jodi could post into her blog along with her training data.

MotionBased has two export formats available for the run data. I was able to create both a standard GPX file and a KML file for each of her runs.
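
As an aside, the two formats hold basically the same track data: GPX stores each trackpoint’s lat/lon as attributes on a trkpt element, while KML wants one long run of “lon,lat,ele” triples inside a LineString. Purely for illustration, here’s a minimal PHP sketch of a converter between the two (file names are placeholders, I’m assuming a GPX 1.1 file, and MotionBased does all of this for you anyway):

  <?php
  // gpx2kml.php: minimal sketch that turns a GPX track into a KML line overlay
  $gpx = simplexml_load_file('run.gpx');
  $gpx->registerXPathNamespace('g', 'http://www.topografix.com/GPX/1/1');

  $coords = array();
  foreach ($gpx->xpath('//g:trkpt') as $pt) {
      // GPX keeps lat/lon as attributes; KML wants "lon,lat,ele" triples
      $children = $pt->children('http://www.topografix.com/GPX/1/1');
      $ele      = isset($children->ele) ? (string)$children->ele : '0';
      $coords[] = $pt['lon'] . ',' . $pt['lat'] . ',' . $ele;
  }

  $kml = '<?xml version="1.0" encoding="UTF-8"?>' . "\n"
       . '<kml xmlns="http://earth.google.com/kml/2.0"><Placemark>'
       . '<name>My run</name><LineString><coordinates>'
       . implode(' ', $coords)
       . '</coordinates></LineString></Placemark></kml>';

  file_put_contents('run.kml', $kml);
  ?>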

At this point I stumbled a bit. My goal here was to end up with a URL link to a Microsoft Virtual Earth map showing the overlay (which I could, presumably, link to as an IMG SRC in the blog posts). Bam. Straight into a wall I ran.

Note to Virtual Earth folks: your satellite imagery is awesome — way better than Google’s for many areas — but there aren’t any ready-made ways of using it with overlays (from a GPX file, for instance)… at least not that I could find. I’d love to be proven wrong here.

So, with a resigned sigh I loaded up Google Earth. For anyone not familiar with it, this is a piece of software that uses Internet back-end satellite data to let you do lots of cool mapping stuff (including very flexible overlays, etc). And since I knew I had a KML file generated for each run, I knew I’d be able to do the overlaid maps successfully.

Sigh.

So, if you’ve read this far, you probably care about the actual steps and the final result. With the backstory behind us, here’s what I did and how it turned out:

Preparation (one-time): create a MotionBased account, install the MotionBased Agent software, and install Google Earth.

Steps to create a map:

  • Do your run with the Forerunner 305
  • Using the MotionBased Agent, synchronize the run into your “inbox”
  • Enter all the details for the run at the MotionBased website to move this run into your “digest” (I’ll leave it to the MotionBased documentation to explain how to do all of this)
  • Drill into the run in your digest and export the run as “KML” format to a file on your computer.
  • Open up Google Earth, and then open the KML file. At this point you’ll see Google Earth slide the world around and zoom gradually in on your run overlay. Sweet.
  • Once Google has zoomed in on your run overlay, adjust the various other layers you want in your saved image (we use “terrain”, “populated places”, “parks”, “roads”, and “geographic features”… but salt to taste).
  • Note that your overlay is listed way down at the bottom of “places” under “temporary places”. You can enable/disable your overlay easily by checking/unchecking that box.
  • I suggest turning off the “status bar” under the “view” menu, to reduce some clutter.
  • Then when you’ve set it the way you prefer — just File->Save Image and create your JPG.
  • Jodi then uses BlogJet to do her posting, creating thumbnailed views of the map images so that they’re a bit more usable as inline images in her blog posts.

They end up looking great! Have a look at the most recent couple weeks of her Portland Marathon training postings and you can get a sense of how these maps end up looking (and how cool it is to have maps from around the country if you run while you travel on vacation!).

In any case, I hope this post helps others who have this great new device and are struggling with how to make great maps from the data!

Comments (4)

Windows Mobile 5.0 and persistent storage

Very interesting article on how Windows Mobile 5.0 uses persistent storage (by way of Bink). It answers a question I’d been wondering about for a while — why does my HTC Wizard PPC phone (equivalent of the 8125) not have a back-up battery like my old Dell Axim did?

Comments off

Great KB article for offline files

Ran into a problem while configuring offline files for Jodi’s laptop last weekend; thanks to KB.304624 for the fix!

Turns out that if you configure the My Documents folder for redirection to a server share, it (and all of its subfolders) will automatically end up configured for offline files. This is ok; it’s probably what you want, really. But what is bad is that YOU CAN’T UNCHECK IT.

Yes, that’s right. If you redirect your My Documents to a file share, the “Make Available Offline” option becomes checked and greyed out/disabled. And for all of the folders inside as well!

So in our case, we wanted to make most of it available offline, but NOT the My Music folder. The default configuration blocked making this change. But after setting the group policy setting from KB.304624, the option suddenly became available to unselect!
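
For reference, my understanding is that the policy from the KB article (“Do not automatically make redirected folders available offline”, if I recall the name right) boils down to a single registry value, something along these lines (double-check the KB before trusting my memory of the value name):

  Windows Registry Editor Version 5.00

  [HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\NetCache]
  ; stop Windows from force-pinning redirected folders for offline use
  "DisableFRAdminPin"=dword:00000001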

It’s not totally clear to me why the offline-files default can’t be overridden without a policy change, but I’m just glad such an override exists!

Comments off

Doddsnet updates – April 2nd

Posted a couple sets of photos to the Doddsnet site just now.

Comments off

Stattraq fixes

Yesterday I suddenly got motivated to fix a couple of things that had been driving me nuts about Stattraq (the blog statistics collector/reporter that I’m using with WordPress). Randy Peterman did a great job with it, but development seems to be about dead — no updates since last summer.

In any event, here are the things that were driving me nuts:

• With “date and name based” permalinks turned on, all single-post views show as “multiple posts” or “feeds” rather than showing the actual post viewed.
• When selecting “month” as the time period, the months are ordered strangely
• No link-button from Wp-Admin console into Wp-Stattraq console
• Lots and lots of search term spam
• Extraneous favicon.ico hits (I think this might be a bug in WordPress)

And here’s what I did to fix these:

Most Viewed Posts fix – I did a bunch of MSN and Google searching here and found a bunch of different suggested fixes for this. None of them seemed to work for me, although perhaps it’s just because I didn’t apply them right. In any event, here’s what I had to do to eventually get this working:

1. In the Stattraq.php file, find the line:
  $urlRequested = $_SERVER['PHP_SELF'] . (isset($_SERVER['QUERY_STRING']) ? "?".$_SERVER['QUERY_STRING'] : '');
2. Change it to:
  $urlRequested = $_SERVER['PHP_SELF'] . $_SERVER['REQUEST_URI'];
3. Cut it from its current location and paste it back in up above the start of the “what article am I” section:
  // moved up here so $urlRequested is set before the article lookup runs
  $urlRequested = $_SERVER['PHP_SELF'] . $_SERVER['REQUEST_URI'];
  // need to get the real article_id or type of server request (RSS, RDF, ATOM, Ping, etc)
  if(!isset($article_id)){

So, why was it broken and why does this work? Here’s my rudimentary analysis. The code to do the url_to_postid stuff was all okay (some others tried to fix this part in their hacks). Yes, just about everything was right. Except for two things: When we got to the line where we did url_to_postid, the “$urlRequested” variable was empty. Whoops. And the second was that even after moving the $urlRequested assignment, it was returning just “index.php?”, which meant it failed to match any article in the url_to_postid function. So adding the REQUEST_URI to the end of the string seemed to work out here. Trial and error to the rescue! Thanks to Guu’s comment which got me thinking about the $urlRequested formatting! Unfortunately, my months-and-months of hit data can’t be “rescanned” and categorized as particular single-posts, since all of the stored URLs are “index.php?”. Yummy.

Calendar month ordering fix – This was a simple fix to calendar.php, changing the SQL query’s ordering so that it displays the months from newest to oldest across years (rather than ordering by month number alone):

1. Open calendar.php and navigate to the function generate_month_list
2. Find this line:
   $sqlQuery = "SELECT COUNT(access_time) as cnt, DATE_FORMAT( access_time, '%Y-%M' ) as access_time, DATE_FORMAT( access_time, '%Y' ) as year, DATE_FORMAT( access_time, '%m' ) AS month FROM $tablestattraq WHERE " . ($options['user_counts_hide_bots'] == 'true' ? " user_agent_type=0 AND" : '') . " access_time BETWEEN '" . ($year-1) . "1201000000' AND '" . ($year+1) . "0201000000' GROUP BY access_time ORDER BY month DESC";
3. Change it to read thusly (the only difference is the ORDER BY at the end):
   $sqlQuery = "SELECT COUNT(access_time) as cnt, DATE_FORMAT( access_time, '%Y-%M' ) as access_time, DATE_FORMAT( access_time, '%Y' ) as year, DATE_FORMAT( access_time, '%m' ) AS month FROM $tablestattraq WHERE " . ($options['user_counts_hide_bots'] == 'true' ? " user_agent_type=0 AND" : '') . " access_time BETWEEN '" . ($year-1) . "1201000000' AND '" . ($year+1) . "0201000000' GROUP BY access_time ORDER BY year DESC, month DESC";

And just like that, now it orders the month list in a way that makes sense!

No link-button for Wp-Stattraq – This was another one with dozens of suggested fixes when I searched. Most involved editing the menu.php file, which means I’d have to reapply it each time WordPress rev’d. Blegh. I wanted something that’s part of the Stattraq plugin itself, even if it means it won’t be a top-level menu in Wp-Admin. Thanks again to Guu and his comment, I was able to do this:

1. Open Stattraq.php and go to the very bottom of the file.
2. Navigate up a line or two (above the “?>”)
3. Drop in this text:
  // add the call to hook into the admin menu
  function AddStatTraqManagePage()
  {
   // adds a "StatTraq" item under Manage, pointing at the Stattraq console
   add_management_page(__('StatTraq'), __('StatTraq'), 1, '../wp-stattraq/index.php');
  }
  add_action('admin_menu', 'AddStatTraqManagePage');

Be sure that you’ve got the appropriate apostrophes (straight ones, none of the fancy curly ones) or it’ll error out when you enable the plugin. But if it works, you’ll have a new StatTraq option under the Manage menu.

Fix my search term spam – I guess it could be worse. I have Akismet running, so I don’t have much trouble with comment spam right now. But as soon as I had applied that filtering, suddenly the mass of referrer/search-term spam bubbled to my attention. Reviewing my search term report, I was able to find the top few search terms that were flooding my stats (and which accounted for 99% of the spam). That makes it easier… I just have to block those specific terms and it’s a poor-man’s spam filter.

1. Open stattraq.php and find the function statTraqGetSearchPhrase
2. Find the list of search engines (note that it’s pretty straightforward to add additional search engines to parse here — just add part of their URL and their search “key” and you’re golden).
3. At the end of this list, add something like this:
   }else if(strpos($referrer, "aol.") !== false || strpos($referrer, "netscape.") !== false){
    $key = "query";
   }
   else if(
    // spam terms: count the hit, but don't record a search phrase
    strpos($referrer, "bingo") !== false ||
    strpos($referrer, "backgammon") !== false ||
    strpos($referrer, "casino") !== false ||
    strpos($referrer, "oyun") !== false
    )
   {
    return null;
   }

   if($s != null && $s != ''){return $s;}

You can add to this list ad infinitum, although I suspect it’ll start to hurt performance at some point. But, like I said, these four search terms made up 99% of my search term spam. Adding them to this list will still count the visit as a hit, but it won’t log any data in the search term column, so it’ll keep your logs clean. Note: I also went through with phpMyAdmin and flushed out all the many rows of spam data from prior to this change.
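
If you want to do the same cleanup, the SQL is simple enough. Note that the table and column names below are guesses based on my install, so check your schema (and your WordPress table prefix) before running anything like this:

  DELETE FROM wp_stattraq
  WHERE search_terms LIKE '%bingo%'
     OR search_terms LIKE '%backgammon%'
     OR search_terms LIKE '%casino%'
     OR search_terms LIKE '%oyun%';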

Extraneous favicon.ico hits – I’ll admit, this is a half-assed fix on my part, as all it does is prevent logging the hit. What I noticed while troubleshooting the various stuff above is that every page hit to the blog appears to log two or more additional hits for /favicon.ico. I read up a little on what favicon.ico is (it’s the little icon that, if found, shows up in the browser’s address bar and favorites). Ok, so why does every hit to the blog log several hits to this in Stattraq? Well, my theory is that WordPress is actually calling the “shutdown” action (which triggers Stattraq to log a hit) multiple times per page. Seems like that’s probably broken, if so. I looked back through my Stattraq data, and it looks like the multiple hits started at some point in the last few months (i.e., with WordPress 2.0 or 2.0.1, maybe).

But it’s also out of my scope. I’m here to fix Stattraq, not WordPress.

Plus, there was a super-easy fix here. We already have code in Stattraq that will ignore hits that self-identify as coming from the Wp-Admin or Wp-Stattraq directories. We just need to add to that. Um… and fix it, because that wasn’t working either (hit refresh a couple of times on the Wp-Stattraq summary page and watch your hits increase!)

1. Open Stattraq.php and head back up toward the top of the file
2. Find this line (just before we do the database row insert):
  if (!strstr($_SERVER['PHP_SELF'], 'wp-admin') && !strstr($_SERVER['PHP_SELF'], 'wp-stattraq'))
3. Change it to read like this:
  if (
   !strstr($urlRequested, 'wp-admin') &&
   !strstr($urlRequested, 'wp-stattraq') &&
   !strstr($urlRequested, 'favicon.ico')  // don't log favicon requests
  )

So that’s actually two key changes — we’re adding ‘favicon.ico’ to the list of exclusions, but we’re also making use of the new, smarter $urlRequested full path rather than the PHP_SELF variable (which, again, translates to just “index.php?” in my environment).

Whew! So that’s the end of my first significant PHP/WordPress/Stattraq adventure. Learned a bunch, and got enough of a dose of frustration to remind myself again why I don’t want to write code for a living.

Comments off

Evan tries to get DSL – an epic voyage in geekiness

Now that this is all over, I can look back and laugh (and blog about it).

This might end up as a long post, so here’s my outline to help you determine if it’s worth reading:

1. Evan has cable modem already, and is paying way too much for it
2. Evan gets a letter advertising a good deal on DSL
3. Evan rejiggers the wiring closet to accommodate DSL in place of cable modem
4. Evan orders DSL and receives the hardware
5. Evan spends an entire weekend trying to get DSL to work right
6. Evan gets pissed and sends back the DSL stuff

Ok, that’s a fair timeline, I suppose. Let’s drill in!

I got a good deal on cable modem when we first moved into the condo. Something like 1/2 price for a couple of months and then $10 off each month for a couple more. We could have gotten a similar deal on DSL at first, also, I have no doubt. But because of the way our condo is wired, it would have been harder to do DSL right away (short version: if you only have one Cat5 plug where the DSL modem needs to go, how do you get both the DSL in and the Ethernet out?).

In any event, we’ve had cable modem for 9 months or so. It’s worked fine: fast, only a couple of outages, etc. Our cable company (Millennium Digital Media) is not the best I’ve ever had, particularly for customer service when things go down, but they contract their NNTP through Giganews, which provides a much better feed than Time Warner ever did back in Charlotte.

So why try to change? Well, MDM charges $49/month for cable modem. And that’s $49/month *IF* you have your own cable modem (which, after paying $7.50 extra for a month or so… we now do!). So it’s hella-expensive. I think I paid $42/month *INCLUDING* cable modem rental in Charlotte. And for my $49/month I get 2–3Mbps down and 256Kbps up. Not stellar.

When Qwest (DSL) sent me the advert, I thought to myself… hmm… I can switch over to DSL with just a bit of rewiring in the wiring closet, I can end up with 1.5Mbps down + 768Kbps up, and I can do it for $19.95/month (I think it would have actually been $24.95/month… the advert was not clear) for a year.

How can you pass that up?! So I ordered it online, waited a couple of days, and received a box from Qwest with my new DSL modem.

It took me about 10 minutes to swap in the Actiontec DSL modem Qwest had sent, in place of our D-Link cable modem.

Right away, any of you with Qwest DSL should be thinking to yourself… wait a second… an Actiontec DSL “modem”?? Right. It’s actually an Actiontec DSL “modem + wireless router”. And therein lies the rub:

• Qwest requires PPPoA (the “A” stands for ATM). This is roughly the equivalent of PPPoE, except that my existing router doesn’t support it (nor does any other router I’ve ever owned). PPPoA (or E) is what Qwest uses to force you to authenticate across their DSL loop to the actual ISP who services your connection. See, for DSL – at least here – the DSL loop and the ISP are decoupled.
• The Actiontec device is a router, not just a modem. It’s not just terminating the loop. It’s actually authenticating my account to the ISP and being given an IP address.
• The Actiontec then works just like any other router you might buy, issuing out NAT addresses (192.168.0.x) to a device on its one Ethernet port.
• Qwest doesn’t support it being used in any other way

So, in case that bullet list isn’t clear, here’s the shorter version: I couldn’t just drop in the DSL modem/router as a replacement for my cable modem and have it work.

Drat. Ok, I’m resourceful. So I rejiggered the configuration in addition to the physical layout changes. Tried using the DMZ function of the Actiontec router. Tried turning off all of the authentication on the Actiontec (“transparent bridging”) and doing the authentication from my existing router. Tried replacing my existing router with just the Actiontec (and configured the various port forwarding, etc).

The last part “sort of” worked. But it broke a lot of the configuration for my internal network. For instance, I could no longer connect to my website directly from the inside. With my previous setup, when a machine inside resolved the domain name “www.doddsnet.com” to the outside IP, my router would let me connect basically right through outbound, turn around, and port-forward it back to the inside. Worked great. But not with the Actiontec. No matter what I tried to turn off on the Actiontec, its web administration stuff kept coming up.

So I had to set up split-DNS and host the “doddsnet” domain internally on my router in addition to the outside view. Suck.
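
For anyone who hasn’t done split-DNS before: the idea is just to serve different zone data for the same domain depending on who’s asking. I won’t claim this matches my router’s setup, but in BIND 9 terms the concept looks roughly like this:

  // named.conf: two views of the same zone (conceptual sketch only)
  view "internal" {
      match-clients { 192.168.0.0/24; };  // LAN clients get internal addresses
      zone "doddsnet.com" { type master; file "internal/doddsnet.com.db"; };
  };
  view "external" {
      match-clients { any; };             // everyone else gets the public IP
      zone "doddsnet.com" { type master; file "external/doddsnet.com.db"; };
  };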

Plus, inbound mail kept getting queued up irregularly. The port 25 forwarding didn’t seem to work well on the Actiontec. I’m totally not sure why this was. I just know that after leaving it running for an afternoon, I came home to find it wasn’t working, since a bunch of “test” email I had sent inbound had never arrived. I swapped back in the cable modem and the original software configuration and POOF, it all started working again (and all the queued mail delivered).

By this point, I had spent all day Saturday and a half day Sunday working on this. I had called Qwest four times and spent probably close to an hour and a half on the phone with them. As a final effort, they suggested I get a static IP. But after doing some research, they returned to the line to inform me that not only could I not get a static IP on the deal I was on, but that it wouldn’t have solved the problem (getting a real IP address to my router) anyways, because of PPPoA.

Whew, long post. I’m proud of anyone who makes it this far.

My summary: I’m quite disappointed that Qwest has the technical limitations they do around their DSL solution. It’s a shame they use PPPoA. It’s a shame they cram a router down my throat instead of providing a transparently-bridged DSL modem like BellSouth did for me when I last had DSL a few years back. It’s a shame their technical support people have to deal with the idiosyncrasies of a bunch of different ISPs, each with different features and behavior. It’s a shame.

Comments off