Help wanted: Apache/PHP

I’m planning on sticking with TypePad as my weblog host once everything opens up officially (tomorrow, from the looks of it). However, this poses a bit of a problem. While I’m slowly moving all of my old posts from my old weblog to this new site, there are still lots of links scattered throughout the ‘net that point to the old addresses.

I think I know of a solution; however, I’m not well enough versed in the intricacies of Apache and PHP to pull it off on my own. So, I’m asking for help!

Here’s what I’d like to do…

All of my old posts reside at my personal server at http://www.djwudi.com/longletter/. It’s a Mac OS X computer running Apache, with PHP enabled.

I know that Apache can handle redirects, based on rules set up in the httpd.conf file. I also know that pattern matching and text string munging can be carried out in PHP.

All of my old individual entry pages are stored on my webserver with the following directory structure:

http://www.djwudi.com/longletter/archives/year/month/day/dirified_post_title.php
http://www.djwudi.com/longletter/archives/2003/07/31/help_wanted_apache_php.php

All of the pages on this new site are stored using a similar, but slightly different directory structure:

http://djwudi.typepad.com/eclecticism/year/month/truncated_title.html
http://djwudi.typepad.com/eclecticism/2003/07/help_wanted_apa.html

What I’m envisioning for the final system is this:

  • Anytime my webserver receives a request for a page that resides within the ‘/longletter/archives/’ directory, Apache redirects to a customised PHP script on my server.
  • That script does three things:
    1. Presents a simple page to the user with wording to the effect of “This site has moved, one moment while we redirect you…”.
    2. Looks at the requested URI and converts it to what the new URI should be. As I’ve kept post titles consistent, and the directory structures are similar, this should be fairly easy with the right regular expressions.
      1. Parse the requested URI.
      2. Remove everything before the 4-digit year and replace it with the new base address.
      3. Remove the 2-digit day.
      4. Truncate the post title to fifteen characters.
      5. Remove the .php extension and replace it with .html.
    3. Redirects the user’s browser to the new, correct URI.
  • Hey presto, we’re done — no matter which page was linked to at my old site, the user has been redirected to the corresponding page at my new site.
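
To make that a little more concrete, here’s my rough guess at what such a script might look like. The filename (moved.php), the new-site address, and all the variable names are just placeholders of my own choosing, and the details are exactly what I’m hoping someone can check over:

<?php
// moved.php: a rough, untested sketch; the filename, variable names,
// and addresses are all placeholders.

// Pattern of the old individual entry URIs, and the new site's base address.
$old_pattern = '|^/longletter/archives/(\d{4})/(\d{2})/\d{2}/([^/]+)\.php$|';
$new_base    = 'http://djwudi.typepad.com/eclecticism/';

// Apache hands the originally requested path to us in REQUEST_URI.
$request = $_SERVER['REQUEST_URI'];

if (preg_match($old_pattern, $request, $match)) {
    // $match[1] = 4-digit year, $match[2] = 2-digit month, $match[3] = post title.
    // Keep the year and month, drop the day, truncate the title to fifteen
    // characters, and swap .php for .html.
    $new_url = $new_base . $match[1] . '/' . $match[2] . '/'
             . substr($match[3], 0, 15) . '.html';
} else {
    // Anything that isn't an individual entry (category archives, the old
    // index page, and so on) just goes to the new front page.
    $new_url = $new_base;
}
?>
<html>
<head>
<title>This site has moved</title>
<meta http-equiv="refresh" content="5;url=<?php echo $new_url; ?>" />
</head>
<body>
<p>This site has moved, one moment while we redirect you&hellip;</p>
<p>If nothing happens after a few seconds, the new address is
<a href="<?php echo $new_url; ?>"><?php echo $new_url; ?></a>.</p>
</body>
</html>

A plain header('Location: ...') call would bounce people over instantly, but then the “we’ve moved” note would never actually be seen; the meta refresh gives it a few seconds on screen first.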

More brainstorming:

  • The above method works well for links going to individual pages, but what about category archives or the main index page itself?
  • Could the PHP script be made smarter? For instance…
    1. If the requested URI contains the year/month/day/title.php string, then the above transformation and redirect is processed.
    2. If the requested URI contains any other string (in other words, it doesn’t point to a specific post), then a page is presented that says something along the lines of “This site has moved, one moment while we redirect you to the new site…”, and a redirect is passed to the user’s browser that points to the index page of the new weblog.
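
As for getting Apache to hand those requests over in the first place, my best guess is a couple of mod_rewrite lines in httpd.conf (assuming mod_rewrite is available on my server, and using the same placeholder script name as the sketch above):

# Quietly hand anything under the old archives to the redirect script.
RewriteEngine On
RewriteRule ^/longletter/archives/ /longletter/moved.php [L]

And since that sketch already falls back to the new front page whenever the requested URI doesn’t match the year/month/day/title.php pattern, the category archives and the old index page would end up in the right place as well.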

Anyway, that’s what I’d like to do. It all seems straightforward enough in my brain, and I think that the technology I have available should be able to handle it all without a problem — I just don’t have the know-how to turn rough sketches like these into working code.

Any and all advice, hints, tips, or straight-up solutions would be greatly appreciated. I’m not rich enough to offer untold wealth or cool prizes or anything, but I can offer much gratitude, public thanks and kudos, and probably pizza and beer (or a PayPal donation to a ‘pizza and beer’ fund, or some such thing).

And you won’t even have to fight me for the beer — I can’t stand the stuff. ;)

Help search engines index your site

We all know that Google is god. Chances are you’ve used Google when doing a search on the ‘net at least once, if not daily, or many times a day. If not, then I’ve heard rumors that there are other search engines out there — though I haven’t used any in so long, I can’t really vouch for the veracity of that rumor. ;)

I wanted to share a few tricks I use here to help Google (and other search engines) index my site, and to try to ensure that searches that hit my site get the most useful results.

All of the following tips and tricks do require access to your source HTML templates (in TypePad, you’ll need to be using an Advanced Template Set). While I’m writing this for an Advanced TypePad installation, the tips will work just as well in any other website or weblog application where you have access to the HTML code.

Specify which pages get indexed, and which don’t

What? One of the most important pages on a weblog from a user’s point of view is the main page. It has all your latest posts, all the links to your archives, your bio, other sites you enjoy reading, webrings, and who all knows what else. However, from the perspective of a search engine, the main page of a weblog is most likely the single least important page of the entire site!

This is simply because the main page of a weblog is always changing, but search engines can only give good results when the information that they index is still there the next time around. I’ve run into quite a few situations where I’ve done a search for one term or another, and one of the search results leads to someone’s weblog. Unfortunately, when I go to their page, the entry that Google read and indexed is no longer on the main page. At that point, I could start digging through their archives and trying to track down what I’m looking for — but I’m far more likely to just bounce back to Google and try another page.

Thankfully enough, though, there’s an extremely easy fix for this that keeps everyone happy.

How? One short line of code at the top of some of your templates is all it takes to solve the problem. We’re going to be using the robots meta tag in the head of the HTML document. The tag was designed specifically to give robots (or spiders, or crawlers — the automated programs that search engines use to read websites) instructions on what pages should or shouldn’t be indexed.

For the purposes of a weblog, with one constantly changing index page and many static archive pages, the best possible situation would be to tell the search engine to read and follow all the links on the index page (so that it finds all the other pages of the site), but not to index that page itself. It remains free to read and index the rest of the site normally.

That’s very easy to set up, as it turns out. The robots meta tag allows four possible arguments:

  • INDEX: Read and index a page normally
  • NOINDEX: Do not index any of the text of the page
  • FOLLOW: Follow all the links on a page to read linked pages
  • NOFOLLOW: Ignore all links on a page

So, in order to do what we want, we add the following meta tag to our document, in the head section, right next to the meta tags that are already there:

<meta name="robots" content="noindex,follow" />

Now, when a search engine robot visits the index page of the site, it knows that it should not index the page and add it to its database; however, it should follow any links on that page to find other pages within the site. This way, searches that hit your site will point to the archive pages that actually contain the requested information, rather than to your front page, which may not have that information anymore.

Update: It turns out that this technique may have some side effects that I hadn’t considered, and might possibly not work at all. For more details, please scroll down to Anode’s comment and my reply in the comment thread for this post. Hopefully I’ll be able to dig up more information on this soon.

Fine tune what sections of a page get indexed

What? There is a proposed extension to the robots meta tag that allows you to not just designate which pages of a site get indexed, but also which sections of a page get indexed. I discovered this when I was setting up a shareware search engine for my old website, and have since gotten in the habit of using it. Now, this is not a formal standard, and I don’t know for sure which search engines support it and which don’t — the creator of this technique has suggested it to the major search sites, but it is not known what the final result was.

Now, why would you want to do this? Simply this: on many weblogs, including TypePad sites, the sidebar information is repeated on every page of the site. There is also certain informational text repeated on every page (for instance, the TrackBack data, the comments form, and so on). This creates a lot of extraneous, mostly useless data — doubly so when that information changes regularly.

By using these proposed tags, any search engine that supports them will only index the sections of a page that we want indexed, and will disregard the rest of the page.

How? Because this is based on the robots meta tag discussed above, it uses the same four arguments (INDEX, NOINDEX, FOLLOW, and NOFOLLOW). Instead of using a meta tag, though, we use HTML comment syntax to designate the different sections of our document.

For instance, every individual archive page on a TypePad weblog that has TrackBack enabled will have the following text (or something very similar):

Trackback
TrackBack URL for this entry:
http://www.typepad.com/t/trackback/(number)

Listed below are links to weblogs that reference (the name of the post)

In order to mark this out as a section that we want the search engine not to index and not to follow (as the only link is to the page that the link is on), we would surround it with the following specialized tags:

<!-- robots content="noindex,nofollow" -->
<!-- /robots -->

For example, I would change the code in the TypePad Individual Entry template to look like this:

<mtentryIfAllowPings>
<!-- robots content="noindex,nofollow" -->
<h2><a id="trackback"></a>TrackBack</h2>
TrackBack URL for this entry:<br /><$MTEntryTrackbackLink$>
Listed below are links to weblogs that reference <a href="<$MTEntryPermalink$>"><$MTEntryTitle$></a>:
<!-- /robots -->
<mtpings>

The same technique can be used wherever you have areas in your site with content that doesn’t really need to be indexed.
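
For instance, since the sidebar content is repeated on every single page of a TypePad weblog, it could get the same treatment. (The div and its id here are just stand-ins for however your own template marks out the sidebar, and I’ve used ‘follow’ rather than ‘nofollow’ so that the robot can still wander through the archive and category links it finds there.)

<!-- robots content="noindex,follow" -->
<div id="sidebar">
<!-- calendar, archive links, blogroll, and so on -->
</div>
<!-- /robots -->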

Now, as I stated above, this is only a proposed specification, and it is not known which (if any) search engines support it. It also requires a healthy chunk of mucking around with your template code. Because of these two factors, it may not be an approach that you want to take, instead simply using the “sledgehammer” approach of the page-level robots meta tag discussed above.

However, I do think that the possible benefits of this being used more widely would be worth the extra time and trouble (at least, for those of us obsessive about our code), and I’d also suggest that, should TypePad gain search functionality, these codes be recognized and followed by the (purely theoretical, at this point) TypePad search engine.

Put the entry excerpt to use

What? The entry excerpt is another very handy field to use in fine tuning your site. I believe that the field is turned off on the post editing screen by default, but it can be enabled by clicking on the ‘Customize the display of this page’ link at the bottom of the post editing screen.

By default, the entry excerpt is used for two things in TypePad: when you send a TrackBack ping to another weblog, the excerpt is sent along with the ping as a short summary of your post; and it is used as the post summary in your RSS feed if you have selected the ‘excerpts only’ version of the feed in your weblog configuration. However, it can come in handy in a few other instances too. One that I’ve discussed previously is in your archive pages. However, the excerpt can also be used to help out search engines.

You may have noticed that when you do a search on Google, rather than simply returning the link and page title, Google also returns a short snippet of each page that the search finds. Normally, this snippet is just a bit of text pulled from the page being referenced, intended to give you enough context to judge how successful your search was. There is a meta tag that lets us determine exactly what text Google displays for the summary, though — which is where the entry excerpt field comes in.

How? We’re adding another meta tag here, so this will go up in the head section of your Individual Archives template. Next to any other meta tags you have, add the following line:

<meta name="description" content="<$MTEntryExcerpt>" />

Then save, and republish your Individual Archives, and you’re done. Now, the next time that Google indexes your site, the excerpt will be saved as the summary for that page, and will display beneath the link when one of your pages comes up in a Google search.

So what happens if you don’t use the entry excerpt field? Well, TypePad is smart enough to do its best to cover for this — if you use the <$MTEntryExcerpt$> tag in a template, and no excerpt has been added to the post, TypePad automatically pulls the first 20 words of your post to be the excerpt. While this works to a certain extent, it doesn’t create a very useful excerpt (unless you’re in the habit of writing extremely short posts). It’s far better to take a moment to create an excerpt by hand, whether it’s a quick cut and paste of relevant text in the post, or whether it’s more detailed (“In which we find out that…yadda yadda yadda.”). In the end, of course, it’s your call!

Use the Keywords

What? Keywords are short, simple terms that are either used in a page, or relate to the page. The original intent was to place a line in the head of an HTML page that listed keywords for that page, which search engines could read in addition to the page content to help in indexing.

Unfortunately, keywords have been heavily abused over the years. ‘Search Engine Optimizers’ started putting everything including the kitchen sink into their HTML pages as keywords in an effort to drive their pages’ rankings higher in the search engines. Because of this, some of the major search engines (Google included) now disregard the ‘keywords’ meta tag — however, not all of them do, and used correctly, keywords can be a helpful additional resource for categorizing and indexing pages.

How? One of the various fields you can use for data in each TypePad post is the ‘Keywords’ field. I believe that it is turned off by default, however you can enable it by clicking on the ‘Customize the display of this page’ link at the bottom of your TypePad ‘Post an Entry’ screen.

Once you have the ‘Keywords’ field available, you can add specific keywords for each post. You can either use words that actually appear in the post, or words that relate closely to it — for instance, I’ve had posts where I’ve used the acronym WMD in the body of the post, then added the three keywords ‘weapons mass destruction’ to the keywords field. You never know exactly what terms someone will use in their search, so you might as well give them the best shot at success, right?

Okay, so now you have keywords in your posts. What now? By default, TypePad’s templates don’t actually use the data in the Keywords field at all. This is fairly easy to fix, however.

In your Individual Archives template, add the following line of code just after the meta tags that are already there:

<meta name="keywords" content="<$MTEntryKeywords$>" />

Then save your template, republish your site (you can republish everything, but doing just the Individual Archives is fine, too, as that’s all that changed), and you’re done! Now, the next time that a search engine that reads the keywords meta tag reads your site, you’ve got that much more information on every individual post to help index your site correctly.

Conclusion

So there we have it. One extremely long post from me, with four hopefully handy tips for you on how you can help Google, and the rest of the search engines out there, index your site more intelligently. If you find this information of use, wonderful! If not…well, I hope you didn’t waste too much of your day reading it. ;)

Feel free to leave any questions, comments, or words of wisdom in the comments below!

Our friend, the humble 'title' attribute

Earlier this evening, I got an e-mail from Pops asking me how I created the little tooltip-style comment text that appears when you hover over links in my posts. I ended up giving him what was probably far more information than he was expecting, but I also figured that it was information worth posting here, on the off chance it might help someone else out.

It’s actually a really easy trick, though not one built into TypePad. Simply add a title attribute to the link itself. For instance, if I wanted the text “Three martinis and a cloud of dust” to appear when someone hovered over a link to Pops’ site, I’d code it like this:

<a href="http://2hrlunch.typepad.com/" title="Three martinis and a cloud of dust">Two Hour Lunch</a>

The end result looks like this (hover over the link to see the title attribute in action):

Two Hour Lunch

That little title attribute comes in wonderfully handy, too, as it can be applied to just about any HTML tag there is.

For instance, good HTML coding includes alt text for all images, so that if someone has image loading turned off in their browser, or if the image fails to load for any other reason, there will be some descriptive text to tell them what gorgeous vistas they are missing. However, in most browsers the only time that text shows is if the image doesn’t load. Using the title attribute in addition to the alt attribute when adding images, we can create that same style of comment when someone hovers over the image. For example:

<img src="lalala.gif" width="360" height="252" alt="NOTICE: I'm not listening!" title="La la la la la la!" />

That way, when displayed in the browser, if the image didn’t load, the text ‘NOTICE: I’m not listening!’ would show instead. In addition, the text ‘La la la la la la!’ will appear if someone lets their cursor pass over the image. Not a necessary thing, but it can be fun for quick, pithy little comments. Here’s the example:

NOTICE: I'm not listening!

Another place I use the title attribute fairly regularly is when I make changes to a post after it’s first posted. HTML includes two tags (<ins> and <del>, for insert and delete, respectively) for marking up changes to text. When I go back in to edit a post after it first appears on my site, I use those tags with a title attribute to indicate when the change was made.

For example, suppose I posted the following:

Pops is a screaming loony, who shouldn’t be allowed within twenty yards of anyone who isn’t equipped with body armor and a machete.

Later, coming to my senses, I could change that like this:

Pops is a <del title="7/30/03 10pm: I think I was on drugs when I wrote this.">screaming loony, who shouldn't be allowed within twenty yards of anyone who isn't equipped with body armor and a machete</del> <ins title="Here's what I meant to say...">great guy, whose website has pointed me to some fascinating tidbits on a regular basis</ins>.

(I hope Pops doesn’t mind the sample text here.) ;)

On screen, after the update, the deleted text would display as struck through, and the inserted text would display underlined (standard editing notation), with the comments displaying on a cursor hover, like this:

Pops is a screaming loony, who shouldn’t be allowed within twenty yards of anyone who isn’t equipped with body armor and a machete great guy, whose website has pointed me to some fascinating tidbits on a regular basis.

So there ya go — more information on the humble little ‘title’ attribute than you probably ever wanted or needed to know. I hope it helps!

Update: (See? There’s a title attribute right there!) As of this writing, the title attribute is barely supported in Apple’s new web browser, Safari. Titles on links will appear in the status bar at the bottom of the window if the status bar is turned on, but that’s it. No other title text will be visible. I’m hoping that this is fixed in a later update to Safari, but for the moment, that’s what we have to work with.

Pet Peeves

Can we please please please stop using the target="new" attribute in links? I don’t want a new window. If I do want a new window, then I’ll right-click and use the “Open link in new window” command. But I don’t want you deciding that I must want a new window, just because you don’t want me taking the oh-so-horrid step of actually (gasp) leaving your site!

If you’ve got a good site, I’ll use the “back” button and come back. If you don’t have a good site, I’m not likely to come back no matter what the circumstances. But by constantly forcing every link to open in a new window and taking control of how I browse away from me, you’re a lot more likely to piss me off to the point where I won’t come back than if you just let me browse normally.

Thank you for your time.

(And on that note, yes, I know that clicking on someone’s name if they leave a comment here does the exact thing I’m bitching about. I haven’t figured out how to get around that yet. If I can, you can be damn sure that I’m turning that little “feature” off.)

Update: Usability Guru Jakob Nielsen also hates this practice, so I’m not alone. Opening new windows for links breaks items one and two of his Top 10 new mistakes of web design article. So there. Bleah. :P

911survivor: Game? Art?

A couple days ago, I linked to something called 911survivor (the site is down as of this writing) in my ‘Destinations’ sidebar. The site was about an Unreal game modification that replaced the standard sci-fi battle arenas with the World Trade Center towers during the Sept. 11 terrorist attacks. At the time, it looked to me like a surprisingly disturbing attempt to capitalize on the tragedy of the day, and I commented on the link as being tasteless.

This morning, Kirsten left a comment letting me know that while at Siggraph, she had met one of the creators of the 911survivor mod.

something to think about – the game is not a ‘game’ but an art mod (game modification). there are no points, there is no way to win, etc. the point of the game (art piece) for them was to explore the real experience of the victims in the WTC and to combat the commercialization of the event by big media. players also must realize the real experience and the real horror of that day (which has been glossed over by an administration and media that capitalizes on the event).

I mentioned that perhaps they should have made more of an indication of their intent on their website, as it wasn’t clear at all to me upon first viewing it what the point actually was.

Later, Kirsten was able to come back with a little more information, and she also said this:

if this is art…then truly the artist doesn’t have to offer you their interpretation on the subject. modern art never does. it simply presents itself, and then lets you decide. you therefore become a part of it through interaction and the decision process.

While searching around for more information on this piece of work, as their site seems to have gone down, I found this post at Fridgemagnet. In one paragraph, they managed not only to grok the concept of the piece long before I did, but also to touch on the very reason why I made the initial assumption that I did:

The level of customization allowed by Doom, then Quake, Half Life, Unreal etc, makes for an interesting artistic medium. We’ve had all sorts of ideologically-driven mods and FPSes already – see the America’s Army game (now available for Macs it seems) and that race-hate Quake mod where you get to kill Jews and blacks. It doesn’t appear that this is a propaganda piece, but it is going to be designed to deliver a message of some sort, whatever the designers want to say about 9-11. Assuming it’s not just publicity trash.

This started me wondering about two things in connection to this. Firstly, the role of the media used for a piece of work; and secondly, when introducing a new type of media, what responsibility the artist might have when the public finds that work.

I think that part of the issue I had where 911survivor is concerned is simply that the medium used here — the game interface — is one that hasn’t been used before (that I have heard of, at least) as an artistic medium. When presented with a gaming environment, my first thoughts are that the subject matter is intended to be just that: a game, some form of entertainment. Hence, when I was browsing the 911survivor site, seeing their concept art of panicked businessmen and women and a schematic of the floors affected by the impact of the airplane, and looking at the screenshots of walls of flame and bodies falling to the ground, I didn’t make the assumption that “this can’t be a game, therefore it must be some sort of interactive art project.” Instead, it appeared to simply be a game — a game with a truly disgusting choice of subject matter.

Given that, then, should it have been more obvious what the intent of the work was? Kirsten says that the artist “doesn’t have to offer you their interpretation on the subject.” Certainly true enough, but the majority of the time when seeing art, even when it’s art we haven’t seen before, we do know that it is art. We may not understand it or like it, we may wish that there was more interpretation provided for us, we may not understand the artist’s intent — we may not even agree that it should be called art. But whatever our reaction, we know that the artist intended their creation to be some form of art. With 911survivor, I had no such reference to work with.

While I’ve been working on this post, Kirsten was able to update her site with more details on what she heard during the workshop where this project was discussed.

The game was made by a group of students for a class (if memory serves) who had not been present at the fall of the towers in NYC, but felt that the media had been capitalizing on the situation and thus glossing over the horrific reality of the event). The game was never supposed to be publicized, it was simply a way for the students to understand the event and to ‘be a part of it’ as it were. The speaker mentioned that so often memorials of wars and tragedies gloss over and distort the truth of the situation, that the horror and the sorrow that was truly there is covered up as much as possible, and instead an idealistic presentation of the situation is given as a sort of ‘reaffirmation’ of life. However, this prevents future generations from understanding the pain/sorrow/horror of the original event. This game actually presents a significant attempt at building a new art form (in my humble opinion) by creating a truly interactive medium in which people feel trapped, upset, frustrated, frightened, disgusted, etc. by a piece of art that is truly interactive….

That bit of information alone does a lot to explain the nature of the project to me, and I have to say, I agree with a lot of the motivations mentioned here. The media (and the government) has not only glossed over the horrors of that day in the intervening months, but has gone on to capitalize on it in ways far more disturbing and far-reaching than I originally took this game to be attempting. Over the past two years, the fall of the WTC has gone from being presented as the tragedy that it was to being the justification for our incursions into foreign governments halfway around the world. 9-11 has become a motivation for revenge for far too many people (and to make it worse, that revenge hasn’t even been directed at the right targets, thanks to the propaganda techniques of our current administration).

I guess it was the combination of the medium of the game engine; the lack of a clear disclaimer that they were using the game engine because it was the best technology for their purpose, not because they were actually attempting to create a ‘9-11 game’; a website that seemed to support my initial assumption that it was a game; and the horrific imagery based on real events and real deaths that disturbed me. Knowing more about it now, I can understand and respect the aims of the creators. However, given the combination of a new medium not traditionally used for anything other than entertainment purposes, and the subject matter of the work, a little more caution and straightforward stating of ideals on the website may have been very much in order.

Importing MT archives: month by month

I’m starting work on importing my archives from The Long Letter into Eclecticism.

What I’m dealing with is simply the fact that I have archives dating back to November of 2000. While Movable Type has an ‘export’ feature, it exports everything. With fewer posts, that might be less of an issue, but since I’m going to have to go through post-by-post to double-check URLs, add pictures, and so on, I wanted to see if I could figure out how to import one month’s worth of posts at a time, instead of the whole kit and kaboodle.

Here’s what I ended up with…

  1. Create a new Archive Template, and put the following code into the template (it’s essentially the standard Movable Type import/export format, with MT template tags plugged into each field):
     <MTEntries>
     TITLE: <$MTEntryTitle$>
     AUTHOR: <$MTEntryAuthor$>
     DATE: <$MTEntryDate format="%m/%d/%Y %I:%M:%S %p"$>
     PRIMARY CATEGORY: <$MTEntryCategory$>
     CATEGORY: <$MTEntryCategory$>
     -----
     BODY:
     <$MTEntryBody$>
     -----
     EXTENDED BODY:
     <$MTEntryMore$>
     -----
     EXCERPT:
     <$MTEntryExcerpt$>
     -----
     <MTComments>
     COMMENT:
     AUTHOR: <$MTCommentAuthor$>
     EMAIL: <$MTCommentEmail$>
     URL: <$MTCommentURL$>
     IP: <$MTCommentIP$>
     DATE: <$MTCommentDate format="%m/%d/%Y %I:%M:%S %p"$>
     <$MTCommentBody$>
     -----
     </MTComments>
     <MTPings>
     PING:
     TITLE: <$MTPingTitle$>
     URL: <$MTPingURL$>
     IP: <$MTPingIP$>
     BLOG NAME: <$MTPingBlogName$>
     DATE: <$MTPingDate format="%m/%d/%Y %I:%M:%S %p"$>
     <$MTPingExcerpt$>
     -----
     </MTPings>
     --------
     </MTEntries>

  2. In MT, under the ‘Weblog Config’ button, go to the ‘Archiving’ section. Click the ‘Add new…’ button, set the Archive Type to ‘Monthly’, and the ‘Template’ to the name of the new template that you just created, then click ‘Add.’

  3. You should now have two options under the ‘Monthly’ archive type. Switch over to the new archive template that you just created, and put the following in the ‘Archive File Template’ box (the <$MTArchiveDate$> tag is what gives each export file its year-month name):

    export/<$MTArchiveDate format="%Y-%m"$>.txt

  4. Click the ‘Rebuild Site’ option, choose the ‘Rebuild Monthly Archive Only’ option, and click the ‘Rebuild’ button.

    Once MT is done rebuilding, you should have a series of files inside an ‘export’ directory inside your site’s archives directory (in my case, that ended up being /longletter/archives/export/; your configuration may be slightly different). There will be one file for each month, named something like 2003-07.txt.

  5. In TypePad, under the ‘Manage’ tab for your weblog, choose the ‘Import/Export’ option. In box A, put in the URL for your first month’s export file (for me, this was http://www.djwudi.com/longletter/archives/export/2000-11.txt). Leave the ‘Encoding’ drop-down menu set to ‘Unicode’, and hit the ‘Import’ button.

  6. There is no step 6. You’re done!

So that’s it. Now that I can go month by month, I’ll import one month, go through each post to make sure all the links are correct, then move on to the next month. This will probably take a while, as I’ve got close to three years of posts to check, but I’m on my way!

And the word ‘PROJECT’ flashed before my eyes…

Custom MT skins?

Custom MT interface

So it appears that SocialDynamX, creator of FMRadio for Radio Userland (Disclaimer: I know nothing about either of these products), is working on a custom interface for Movable Type.

First impression? Ugh, that’s horrid.

Now, I’m a little unclear from looking at their site as to whether that’s a replacement for the default MT interface that you see in your web browser (as is implied by the term “skin”), or whether it’s a separate standalone program for posting to MT (such as Kung-Log, UserSpace or Zempt). If it’s a standalone program, then okay, it’s most likely Windows-based, and the horrid ugly interface makes sense. But if it’s a “skin” designed to replace the standard MT interface within the browser — why is it so verschluggene ugly?

I was going to go into more detail, but I’ve gotta head off to work, and I’m out of time. Judge for yourself, I guess. ;)

(via Scoble)

BuyTunes blows

Earlier this week BuyTunes popped up attempting to capitalize on the success of the iTunes Music Store by moving the same general idea to the Windows platform.

So far, the word is that they suck.

I already knew that they were blatantly ripping off Apple’s ads. I’d link to the BuyTunes versions, but that brings up the second major issue: they’ve restricted their website to Internet Explorer for Windows only. Any other browser, and you get redirected to this page. So far, things weren’t looking very good.

Then Jennifer at ScriptyGoddess actually tried to use BuyMusic’s services. Let’s just say that she’s not a satisfied customer.

First problem. After you buy an album, you need to download it. Sure, I knew that. What I didn’t know is that you have to download EACH SONG INDIVIDUALLY. One click per song. With Two large sized albums with many songs on it – it can be just a LITTLE annoying.

[…]

Second problem. Before each song plays – it has to download and verify your license. You can’t mulitple select a bunch and do this. You need to do this before EACH SONG will play.

[…]

Third and VERY big problem. […] Since I’m using Windows200, they force you to use a windows media plugin…[that] CRASHES consistently EVERY time I try to burn a CD. It is simply impossible to create a cd from my machine using that plugin.

[…]

And here comes problem number four. The “Main” license is the one I downloaded the first time to my machine (the windows 2000 box with the defective Roxio plugin). Subsequent downloads are “secondary licenses” from which you are not allowed to transfer to a mobile device, burn a cd, or do ANYTHING with except listen to them on that one machine.

[…]

In walks problem number five. Here’s their oh-so helpful (probably computer generated) form letter to me…

We apologize if you have experienced trouble downloading your music to a digital media player or copying your music to a CD. Unfortunately, We are unable to provide technical assistance after you have downloaded the music from BuyMusic.com to your primary computer. In addition, we are unable to credit you back for failed or damaged copies once you have successfully downloaded the music to your primary computer.

Sounds like BuyMusic is bound to be a bust, to me.

About 'Noises'

I wanted to take a moment to draw attention to the ‘Noises’ section of the sidebar to the left of this page. I’m tossing albums up there at more or less random intervals (often determined by what I’m listening to at any given point in the day). When I do, though, I’m highlighting three key tracks from the album and adding streaming audio ‘PLAY’ links to them, in addition to one for the full album.

The albums won’t stay up indefinitely, and the tracks aren’t downloadable (streaming only, sorry), but this should let anyone stopping by take a quick listen to whatever I’m recommending — and, of course, clicking through the picture lets you buy it from Amazon.

Enjoy!