replacing a failed Sun LVM mirror

The problem with mirroring your disks is that one side of the mirror will invariably fail two weeks later. This has happened to me several times, first under NetBSD (with its excellent RAIDFrame technology, functionally a worthy competitor to Sun Volume Manager) and now with the Sun LVM mirror that I set up several weeks ago and documented in this very blog.

I called Sun support, and they shipped me a new disk. Here’s how I went about replacing the failed device, without incurring any downtime (yay, Sun hot-swappable parts)! Continue reading
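For the curious, the procedure looks roughly like this. (The device and metadevice names below are made up for illustration — substitute your own from metastat output; this assumes a SCSI hot-swap bay managed by cfgadm.)

```shell
# Identify the failed half of the mirror; the bad submirror shows "Needs maintenance"
metastat d10
metadb -i

# Forcibly detach the failed submirror (assume d12 lives on the dead disk c1t1d0)
metadetach -f d10 d12

# Delete any state database replicas on the failed disk
metadb -d c1t1d0s7

# Unconfigure the device, physically swap the drive, then reconfigure it
cfgadm -c unconfigure c1::dsk/c1t1d0
cfgadm -c configure c1::dsk/c1t1d0

# Copy the partition table from the surviving disk and update SVM's device IDs
prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
metadevadm -u c1t1d0

# Recreate the replicas and reattach; the mirror resyncs in the background
metadb -a c1t1d0s7
metattach d10 d12
```

The whole thing happens while the system keeps running off the surviving submirror.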

rebuilding Asterisk from scratch

As I mentioned in a previous post, I realized that the knowledge I was going to get out of Asterisk was limited by the amount of hand-holding that Asterisk@Home provides. Don’t get me wrong — A@H is a great way to get started with Asterisk, as it comes with a huge variety of features already built in. However, for someone who is a little happier hacking about and getting to know every nook and cranny of his VoIP system, I realized that I’d have to start over.

I was also eager to rebuild the VoIP server using a BSD. I find Linux to be just too bloated for use as a VoIP server, and I was also interested in seeing how far NetBSD has come from the days when I used it last; my last NetBSD machine ran 1.6, and they’re onto 3.0 by now. I’m very familiar with the progress made by the FreeBSD Project, and am interested to see how NetBSD stacks up. Continue reading

TheDailyWTF.com on AJAX and Web 2.0

If you work in IT, and you don’t already read The Daily WTF, you should. The site bills itself as documenting “curious perversions in IT” and I have to say that this is an understatement; the code that frequently shows up there is bad enough that the word “poor” does not begin to describe it. Sadly, I think much of the code behind so-called enterprise-grade software out there in the world is of the same calibre. We should be afraid.

I had to laugh at their recent skewering of AJAX and Web 2.0. I’ve complained about such things before, but this one does it in such a brilliant way that I really don’t have to say much more:

The introduction of the XMLHttpRequest component opened the doorway for a new breed of “fancy schmancy” web applications like Flickr, GMail, etc. This, in turn, spawned an entire sub-industry and a new series of buzzwords seemingly based on the names of household cleaning chemicals. It even incremented the current version of the Internet to 2.0.

Although the Web is apparently now at version 2.0, much of the software continues to be in beta.

the risks of “outsourcing to the web”

It seems that within the last few months, much has been made of so-called Web 2.0 sites. The fact that it is impossible even to settle on a noun to describe “Web 2.0” — is it a paradigm? a metaphor? a meme? (ugh) — should be enough to convince you that “Web 2.0” is just the latest buzzword to describe webpage design and innovation, but I digress. My objective today isn’t to complain about the use of the term Web 2.0, but to talk about one alarming aspect of these new Web 2.0 sites: the fundamental outsourcing of your private data storage to commercial entities.

Many of the new, highly interactive web properties like Flickr, Gmail, Friendster, and the like, use sophisticated technology — at least in the context of the Internet — to make their sites operate much like thick clients (traditional software running on your desktop computer). One of the primary technologies in use, of course, is Asynchronous Javascript and XML (AJAX), which lets a page exchange data with the server in the background, creating the illusion that you are using a desktop application rather than a website that reloads on every click. This has made it possible to create web-based lookalikes of traditional desktop applications, such as e-mail (e.g. Gmail), bookmark managers (e.g. del.icio.us), CRM applications (e.g. Salesforce) and even office applications like a word processor (e.g. Writely). A number of factors have made these sites particularly attractive to the end user:

  • ease of use
  • no need to worry about data storage on one’s local (failure-prone) storage device
  • no need to maintain one’s locally-installed software, apply security patches and bugfixes, etc.
  • ability to quickly “upgrade” to new vendor-released versions since the application is centrally-managed

We can expect the adoption rate of these applications to increase both as more users discover their utility, and as more such applications are created.

In the move from the desktop to the web (the so-called “outsourcing to the web”), many issues such as privacy, data retention, etc. are frequently glossed over or simply not recognized by end users as being important. It is difficult for many users to understand even one site’s privacy policy, never mind five or ten. The perplexing question of “How is the data from my personal documents such as e-mails, letters, word-processing files, etc. being used?” may not be adequately answered even by privacy policies, because such privacy policies often cover only the information being stored itself, not any derivative works. By derivative works, I mean that statistical data about your e-mail or Writely documents might be used to target ads to you, or the aggregate statistics of word frequency amongst all Writely users might be shared or sold to marketers for data mining purposes. As one marketing guru said to me recently, the possibilities for data mining are endless (his exact words were “we data mine the hell out of things!”).

Another big concern is that many of the applications currently being created revolve around Google in some way. Not only has Google been the primary developer of many rich web applications, with products such as Google Maps, Google Desktop, Google Page Creator, Writely and of course, GMail, but many other developers have taken advantage of Google’s open API to create their own derivative applications (such as Frappr). What happens when Google decides to use the stored user data in new ways? Or what if Google, formerly seen as the benevolent hacker’s workshop, changes its tune and becomes more corporate and controlling like Microsoft? The concentration of power around one publicly-traded corporation should be alarming to any consumer. (I could repeat the same arguments about Yahoo, given its attempt to compete in the same space as Google, albeit by acquisition — their purchases of del.icio.us, upcoming.org, and so on being prime examples of this strategy.)

What is the solution? As I pointed out already, I expect the adoption of such rich applications to increase, not decrease; not only because of their technological merits, but because they frequently build online communities that are appealing to users (and marketers, of course). However, I think that any user who values his or her privacy and finds the notion of data mining based on one’s personal correspondence to be uncomfortable would do well to continue managing such data with traditional desktop software.

are you also evolvolving?


(The above is a hilarious typo in the website for VON Canada.)

This month’s Toronto Asterisk Users’ Group meeting was held at the Voice on the Net Canada 2006 conference. Given the audience (business users and implementers of largely commercial telecommunications equipment), Asterisk was probably a new concept to most attendees, which meant that some of the presentations at TAUG were aimed at an entry-level audience.

Still, there were some really cool Asterisk add-ons demonstrated. One such patch was the Asterisk Real-Time Voice Changer, which lets you alter the pitch of your voice in real-time. It’s great fun for pretending to be a secret informant. Claude Patry, one of the developers of the patch, noted that if you have access to the Asterisk CLI, you can even do this to someone else’s voice call in progress — a very evil use, to be sure, but a great way to get back at co-workers who piss you off.

Iotum demonstrated their “relevance engine”, which is basically a rules-based engine for determining the priority and subsequent routing of incoming voice calls — so, for example, if my girlfriend called, I could get alerted over instant messenger, but lower-priority folks would get shunted to voicemail. Of course this is a trivial example; the rules could also take into account things like “do I have a meeting scheduled with this caller later in the day” or “am I expecting a call from such-and-such a person today”.
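You can fake a very crude version of this in a stock Asterisk dialplan. Here’s a sketch (the phone number, peer name, and mailbox below are invented for illustration, assuming an Asterisk 1.2-era extensions.conf):

```ini
[incoming]
; crude relevance routing: one VIP caller rings through, everyone else to voicemail
exten => s,1,GotoIf($["${CALLERID(num)}" = "4165551234"]?ring:vm)
exten => s,n(ring),Dial(SIP/myphone,20)
exten => s,n(vm),Voicemail(u100@default)
```

Obviously the interesting part of Iotum’s product is that the rules can draw on context like calendars and presence, not just a hard-coded caller ID list.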

I’ll probably be reinstalling my Asterisk@Home system with a regular Asterisk installation sometime soon, so I can get a better idea about how things are all put together.

trying out Asterisk@Home

I’ve recently been getting into voice-over-IP telephony, both due to my dayjob (where I’m now responsible for managing a very expensive but full-featured Cisco VoIP system) and my long-time desire to build a hobbyist PBX at home using Asterisk. I’d set up Asterisk under a FreeBSD 5.4 server some months ago, but got as far as installing a demo dialplan before I got distracted. This time around I decided to give Asterisk@Home a spin, because it bundles many common Asterisk add-ons and features into an easy-to-install ISO based on CentOS 4.x. (For those who don’t know what CentOS is, it’s basically a straight recompile of Red Hat’s popular Enterprise Linux product, and as such, available for free.) Continue reading

Internet nostalgia

I’ve been “on the Internet” (a term which, by the way, makes no sense) for about twelve or thirteen years now, and although this makes me a young ‘un from the perspective of those folks who invented TCP/IP, I still remember enough of the days before the World Wide Web to have some nostalgia for the way the Internet used to be. I bring this up because I just came across a little notebook in which I used to write down relevant URIs and other ephemera. Some of the gems in there:

  • a listing of my favourite Archie servers
  • directions for accessing the e-mail anonymizer that used to live at anon@anon.penet.fi
  • my old FidoNet e-mail address (julian.dunn@f11.n241.z1.fidonet.org)
  • logon information for my various FreeNet accounts such as the now-defunct Cleveland FreeNet
  • a listing of Gopher servers upon which one would have found current price quotes for Macintoshes
  • various sundry BBS telephone numbers that I’m sure are all out of service by now
  • information on how to get to my Dad’s VAX account via Datapac (anyone know if Bell Canada is still operating Datapac these days?)

I was surprised to not only find that I’d written down information for accessing the Internet Oracle (a/k/a rec.humor.oracle) but that the Oracle is still going strong.

Anyone else have old Internet memories they’d like to contribute?

setting up Solaris zones

I promised to follow up on the last article about Solaris Logical Volume Manager with one about setting up Solaris zones, so here it is.

For those of you not in the know, Solaris zones (or containers; the terms are used interchangeably) are Sun’s virtualization technology, similar to Microsoft Virtual Server or VMware’s products. However, the “guests” (or “non-global zones” in Solaris-speak) must also be Solaris, and effectively run the identical base system as the “host” (or “global zone”). This is quite similar to the way FreeBSD’s jails work.

Sun is pushing the zone technology very hard these days, since virtualization is the hot topic in IT at the moment. Solaris zones do have some interesting advantages over even FreeBSD jails, namely:

  • patches applied in the global zone are automatically applied to the non-global zones (for the most part), easing maintenance;
  • ability to share the pkgdb from the global zone to the non-global zones;
  • ability to easily loopback-mount global zone filesystems from within non-global zones;
  • ability to do some resource control (CPU shares only) upon the non-global zones

I predict that Sun engineers are working very hard on adding more knobs to the last item, so that you’ll eventually be able to control how much swap, RAM, etc. that the non-global zones are using.
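As a preview of the full article, creating a zone boils down to a zonecfg/zoneadm session something like the following. (The zone name, zonepath, NIC, IP address, and share value here are placeholders — adjust to taste; the cpu-shares resource control only does anything if you’ve enabled the fair-share scheduler.)

```shell
# Define the zone configuration (run as root in the global zone)
zonecfg -z web
zonecfg:web> create
zonecfg:web> set zonepath=/zones/web
zonecfg:web> set autoboot=true
zonecfg:web> add net
zonecfg:web:net> set physical=bge0
zonecfg:web:net> set address=192.168.1.50/24
zonecfg:web:net> end
zonecfg:web> add rctl
zonecfg:web:rctl> set name=zone.cpu-shares
zonecfg:web:rctl> set value=(priv=privileged,limit=20,action=none)
zonecfg:web:rctl> end
zonecfg:web> verify
zonecfg:web> commit
zonecfg:web> exit

# Install and boot the zone, then attach to its console
# to answer the first-boot sysid questions
zoneadm -z web install
zoneadm -z web boot
zlogin -C web
```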

Continue reading

oops, we didn’t QA patching on zone-enabled systems

(I’m still writing my article on setting up zones under Solaris 10. Bear with me while I assemble all the relevant details.)

I just got hit by this bug:

Transition patching (-t option) is not supported in a zones environment.

Basically, you can’t patch a system with non-global zones installed without manually hacking an rc script! As the last comment in the thread says, “Hmm, the thing that most concerns me is that a bug that obvious should have been found in even the most cursory testing.”