PowerPoint, meet HTML Slidy

I can’t claim to be the first to think that PowerPoint is far too heavyweight a tool for making presentations. Edward Tufte has made this argument before, going so far as to claim that PowerPoint makes people dumber. Having been on the receiving end of many sleep-inducing PowerPoint-driven presentations, I can heartily agree.

From a technical perspective, though, PowerPoint really is a bloated product for a task that should be fairly mundane. Into the breach steps HTML Slidy, which comprises two simple pieces: a CSS stylesheet and a bunch of Javascript that together turn an XHTML document into a PowerPoint slide-show workalike. Each slide is contained in a <div class="slide"> tag, and the Javascript gives you the familiar PowerPoint navigation through the slideshow (forward and back arrows, clicking the slide to advance, etc.). The only downside — and one that could easily be solved with custom plugins for any XML editor, in my view — is that one must code all the slides by hand. Some would argue that this is actually a good thing: having to hand-bomb everything forces one to think about the content more carefully.
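For flavour, here’s roughly what a minimal two-slide deck looks like (the slidy.css and slidy.js locations below are from memory, so check the W3C site for the current paths):

<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>My talk</title>
<!-- stylesheet/script paths are from memory; check w3.org -->
<link rel="stylesheet" type="text/css"
  href="http://www.w3.org/Talks/Tools/Slidy/slidy.css" />
<script src="http://www.w3.org/Talks/Tools/Slidy/slidy.js"
  type="text/javascript"></script>
</head>
<body>
<div class="slide">
<h1>First slide</h1>
<ul><li>a bullet point</li></ul>
</div>
<div class="slide">
<h1>Second slide</h1>
<p>More content here.</p>
</div>
</body>
</html>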

The other interesting presentation-related product I saw recently was TeamSlide: yet another Web 2.0-style, AJAX-fronted application (a PHP program), this one simulating the parts of Microsoft Live Communications Server or WebEx that let you share a presentation with other participants. The advantage of TeamSlide is that it’s only US$99, and you don’t need any fancy browser plugins or a thick client. The disadvantage is that it’s basically a distributed slideshow application; you can’t share your desktop or other windows with the other participants, or do anything fancy beyond what a PowerPoint-style slideshow already offers.

Still, I think there is promise for the World After PowerPoint. Thanks to Jon Udell for writing about these technologies.

the risks of “outsourcing to the web”

It seems that within the last few months, much has been made of so-called Web 2.0 sites. The fact that it’s impossible even to settle on a noun for “Web 2.0” — is it a paradigm? a metaphor? a meme? (ugh) — should be enough to convince you that “Web 2.0” is just the latest buzzword for webpage design and innovation, but I digress. My objective today isn’t to complain about the term Web 2.0, but to talk about one alarming aspect of these new Web 2.0 sites: the wholesale outsourcing of your private data storage to commercial entities.

Many of the new, highly interactive web properties like Flickr, Gmail, Friendster, and the like use sophisticated technology — at least in the context of the Internet — to make their sites operate much like thick clients (traditional software running on your desktop computer). One of the primary technologies in use, of course, is Asynchronous Javascript and XML (AJAX), which lets a page exchange data with the server in the background and update itself in place, creating the illusion that you’re working in a desktop application instead of reloading a web page on every click (there’s a bare-bones sketch of the pattern after the list below). This has made it possible to create web-side lookalikes of traditional desktop applications, such as e-mail (e.g. Gmail), bookmark managers (e.g. del.icio.us), CRM applications (e.g. Salesforce) and even office applications like a word processor (e.g. Writely). A number of factors have made these sites particularly attractive to the end user:

  • ease of use
  • no need to worry about data storage on one’s local (volatile) storage device
  • no need to maintain one’s locally-installed software, apply security fixes and bugfixes, etc.
  • ability to quickly “upgrade” to new vendor-released versions since the application is centrally-managed
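
To illustrate what’s happening under the hood, here is that bare-bones sketch of the AJAX pattern. The URL and element ID are made up for the example, and a real site would add error handling (plus a fallback to ActiveXObject for older Internet Explorer):

// ask the server for data in the background, without leaving the page
var req = new XMLHttpRequest();
req.open("GET", "/inbox.php?view=summary", true); // true = asynchronous
req.onreadystatechange = function () {
    if (req.readyState == 4 && req.status == 200) {
        // splice the server's response into the page in place, no reload
        document.getElementById("inbox").innerHTML = req.responseText;
    }
};
req.send(null);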

We can expect the adoption rate of these applications to increase both as more users discover their utility, and as more such applications are created.

In the move from the desktop to the web (the so-called “outsourcing to the web”), many issues such as privacy and data retention are frequently glossed over or simply not recognized by end users as being important. It is difficult for many users to understand even one site’s privacy policy, never mind five or ten. The perplexing question of “How is the data from my personal documents such as e-mails, letters, word-processing files, etc. being used?” may not be adequately answered even by privacy policies, because such policies often cover only the stored information itself, not any derivative works. By derivative works, I mean that statistical data about your e-mail or Writely documents might be used to target ads to you, or the aggregate statistics of word frequency amongst all Writely users might be shared or sold to marketers for data mining purposes. As one marketing guru said to me recently, the possibilities for data mining are endless (his exact words were “we data mine the hell out of things!”).

Another big concern is that many of the applications currently being created revolve around Google in some way. Not only has Google been the primary developer of many rich web applications, with products such as Google Maps, Google Desktop, Google Page Creator, Writely and, of course, Gmail, but many other developers have taken advantage of Google’s open API to create their own derivative applications (such as Frappr). What happens when Google decides to use the stored user data in new ways? Or what if Google, formerly seen as the benevolent hacker’s workshop, changes its tune and becomes more corporate and controlling, like Microsoft? The concentration of power around one publicly-traded corporation should be alarming to any consumer. (I could make the same arguments about Yahoo, given their attempt to compete in the same space as Google, albeit by acquisition; their purchases of del.icio.us, upcoming.org, and so on are prime examples of this strategy.)

What is the solution? As I pointed out already, I expect the adoption of such rich applications to increase, not decrease; not only because of their technological merits, but because they frequently build online communities that appeal to users (and marketers, of course). However, any user who values his or her privacy, and who is uncomfortable with the notion of data mining based on one’s personal correspondence, would do well to keep using traditional desktop software to manage that data.

trying out Asterisk@Home

I’ve recently been getting into voice-over-IP telephony, both because of my day job (where I’m now responsible for managing a very expensive but full-featured Cisco VoIP system) and my long-time desire to build a hobbyist PBX at home using Asterisk. I’d set up Asterisk on a FreeBSD 5.4 server some months ago, but got as far as installing a demo dialplan before I got distracted. This time around I decided to give Asterisk@Home a spin, because it bundles many common Asterisk add-ons and features into an easy-to-install ISO based on CentOS 4.x. (For those who don’t know what CentOS is, it’s basically a straight recompile of Red Hat’s popular Enterprise Linux product, and as such, available for free.)

Internet nostalgia

I’ve been “on the Internet” (a term which, by the way, makes no sense) for about twelve or thirteen years now, and although this makes me a young ‘un from the perspective of those folks who invented TCP/IP, I still remember enough of the days before the World Wide Web to have some nostalgia for the way the Internet used to be. I bring this up because I just came across a little notebook in which I used to write down relevant URIs and other ephemera. Some of the gems in there:

  • a listing of my favourite Archie servers
  • directions for accessing the e-mail anonymizer that used to live at anon@anon.penet.fi
  • my old FidoNet e-mail address (julian.dunn@f11.n241.z1.fidonet.org)
  • logon information for my various FreeNet accounts such as the now-defunct Cleveland FreeNet
  • a listing of Gopher servers upon which one would have found current price quotes for Macintoshes
  • various sundry BBS telephone numbers that I’m sure are all out of service by now
  • information on how to get to my Dad’s VAX account via Datapac (anyone know if Bell Canada is still operating Datapac these days?)

I was surprised to find not only that I’d written down information for accessing the Internet Oracle (a/k/a rec.humor.oracle) but also that the Oracle is still going strong.

Anyone else have old Internet memories they’d like to contribute?

connecting Tomcat and Apache

Please bear with me while I engage in the following diatribe: “Why Is It So Darn Difficult to Connect Apache and Tomcat?” Anyone who has worked with mod_jk/mod_jk2 and their ilk knows that connecting Apache and Tomcat over AJP (Apache JServ Protocol) is probably one of the more difficult server configuration tasks out there.

A little history: When Tomcat was still Apache/JServ (way back in the day), there was a mod_jserv that managed the AJP pipe between the front-end HTTP server (i.e. Apache HTTPD 1.x) and the back-end application server (JServ). Eventually, this evolved into mod_jk for the first series of Tomcat application servers.

All well and good, and the configuration is fairly straightforward, right up to the point of actually talking to your web application: the dreaded JkMount syntax. An example directive looks like this:

JkMount /examples/* worker1

There are a number of problems with this syntax. First, it unnecessarily ties the path you use to access the web application on the front-end to the application’s path on the back-end. So, for instance, I have no way to specify that I actually want to map “/julians_examples” on the front-end to “/examples” on the back-end. Want to do that? Sorry — time to institute some kind of mod_rewrite hackery. Secondly, the “*” doesn’t mean what you think it means! It’s not a general-purpose wildcard, so you can’t selectively map things; for instance, I can’t say JkMount /examples/foo* to map all resources starting with foo to the application server. That directive tells AJP to look for a resource literally named “/examples/foo*”, which of course fails, since no resource has an asterisk in its name.
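
For the record, the hackery would look something like this. It’s a sketch I haven’t tested, assuming mod_rewrite is loaded; the [PT] flag hands the rewritten URI back to Apache’s URI-mapping phase so that mod_jk can match it:

RewriteEngine On
# show /julians_examples to the world, but let mod_jk match /examples
RewriteRule ^/julians_examples/(.*)$ /examples/$1 [PT]
JkMount /examples/* worker1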

Ok, so along comes mod_jk2, which is supposed to be a refactoring of mod_jk. It has certain improvements: it can talk over a shared UNIX socket (instead of the network-based AJP protocol), the configuration is simplified again, and so on. But the web application mapping problem persists! The syntax to map the front-end to the back-end looks like this:

<Location "/foo">
JkUriSet worker ajp13:backend-server:8009
</Location>

ARGH! Still no way to specify that the front-end /foo should be mapped to some other back-end path!

Why is this so difficult? And why have so many connector projects (like mod_webapp) died? A few years ago, I looked into mod_webapp’s WARP protocol, and it seemed like a breath of fresh air compared with the antique AJP13 protocol. What happened to it?

I should mention as a postscript that maybe, just maybe, the new mod_proxy_ajp in Apache HTTPD 2.1 will solve my problems. Its syntax looks like this:

<Location /examples/>
ProxyPass ajp://backend-server:8009/examples/
</Location>

Wow! Finally, a way to map something on the front-end to a path that can actually be different on the back-end.
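
Assuming mod_proxy_ajp behaves like the rest of mod_proxy here (I haven’t tried it yet, so consider this a sketch), the mapping I wanted all along should be as simple as:

<Location /julians_examples/>
# the public path and the back-end path no longer have to match
ProxyPass ajp://backend-server:8009/examples/
</Location>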

I don’t understand why it’s taken us ten years (and counting) to get to this state. Is it just me that thinks this is totally bonkers?

As a footnote to this, I get the sense that AJP13 is a very poorly documented protocol that is still around simply due to momentum. Read these statements from its own documentation, for example:

"This describes the Apache JServ Protocol version 1.3 (hereafter ajp13 ). There is, apparently, no current documentation of how the protocol works. "
"In general, the C code which Shachor wrote is very clean and comprehensible (if almost totally undocumented)."
"I also don’t know why certain design decisions were made. Where I was able, I’ve offered some possible justifications for certain choices, but those are only my guesses."

Undocumented code? Unjustifiable design decisions? Little current documentation about how the protocol works?

It’s things like this that are killing us in the Open Source community. I find it pretty difficult to pitch Tomcat as a worthy alternative to IBM WebSphere or BEA WebLogic when we have this kind of cruft sitting around, pretending to be an “enterprise-worthy” solution.

something lost in translation

Company name of the offender removed for their protection (they’re already in Chapter 11 – no need to help them along here)

Dear valued customer,


On the 25.November 2003, 11:35 we had a route leakage. Due to a mistake on the -------- backbone, we anounced to many routes.


For this reason many peering-sessions were closed automaticly. Issue is resolved. Situation is going normalized.

Regards

[deleted]

Heh. "Situation is going normalized" — I have to use that one in resolving RT tickets.

if it doesn’t have a www it’s not a website?

What is it with website operators who think that if it doesn’t have a “www” in front of it, it’s not a website?

I mean, how hard is it to replace:

ServerName www.froufrou.net

with:

ServerName froufrou.net
ServerAlias www.froufrou.net

?

I mean, it’s not like they’re running Insecure Information Services or something. It can’t be that hard to do.

Of course, mind you, they are running a version of Apache with known security holes, a version of PHP with known security holes, and a version of OpenSSL with known security holes.

koremutake

So I’ve been trying to figure out what this koremutake thing means — this algorithm that the shorl.com people are using to make their shortened URLs easy to memorize. I figure that “koremutake” is the representation of some significant number using the aforementioned algorithm.

So I tooled around and tried to figure out the significance of the number. Using the syllable-to-number chart provided, “koremutake” (or, more accurately, KO RE MU TA KE) is really 39 67 52 78 37. Does this number mean anything to anyone?
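
If you assume the chart’s 128 syllables act as base-128 digits, most significant first (that part is my guess; shorl.com doesn’t spell it out), the arithmetic is simple:

// KO RE MU TA KE, per the shorl.com syllable chart
var digits = [39, 67, 52, 78, 37];
var value = 0;
for (var i = 0; i < digits.length; i++) {
    value = value * 128 + digits[i]; // each syllable is one base-128 digit
}
// value works out to 10610353957

So, under that assumption, the magic number is 10,610,353,957.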