In The Armchair

Linux Mint 14.1

Posted in Computers by Armchair Guy on December 6, 2012

I upgraded the memory on my old Lenovo T61 recently, and thought it was a good time to upgrade Linux as well.

When I began looking for candidate distributions, one of the big problems with Linux became apparent: the community struggles to hold on to gains that it has made.  Excellent, painstakingly developed features are often discarded in favor of a “refresh” or new direction.  Many times, it isn’t clear at all that the refresh accomplishes anything.

This happened to Ubuntu, the distro I’ve been using for several years now, with their decision to develop a completely different UI (Unity).  I decided to install a less deviant distro, and went with Linux Mint 14.1.  Not too big a step, since Mint 14.1 is based on Ubuntu 12.10.  Linux Mint is pretty good, but like most Linux distributions it is a little rough around the edges.  Here are my experiences, which I’ll update as time goes by.

The good:

• Except for one minor hiccup, the install process was excellently simple and effortless.
• Lenovo dock integration works with zero tweaking (when placed in the dock, the display switches to the external monitor, and vice-versa).  This is very important.  (This completely stopped working… I’m trying to figure out why and how to make it work again. Update: It seems related to this previous post of mine.)
• Videos play without too much stutter or jerkiness, even in full screen mode.
• I don’t have to do anything to get sound to work. (See below for a caveat)
• The UIs for various settings are very smooth and intuitive without unnecessary clutter.
• The user interface (panel, menu, desktop) is quite smooth and sleek, most things are placed in intuitive locations, and it is easy to get work done without the UI getting in my way.
• I had existing home directories for a small number of user accounts.  When I mounted those home directories and added users with the same names as before, the userids were set properly and users were able to immediately begin using their old home directories.  I am not sure whether this was just a coincidence (did I recreate user accounts in the same order that I had done previously, and hence get the same sequence of userids? If I had changed the order, would everything have been messed up?).  I was concerned about this, and it’s great that it worked so smoothly.

The bad:

• Why create a DVD-size install disk (800-odd MB)?  I didn’t have a USB stick handy, and it was pure luck that I had a blank DVD lying around.  I’d expect CD-size install disks to work better.  But this is a minor problem.  I imagine most people would use a USB stick.
• The install process is excellently simple.  However, one part of the process gave me some nervous moments.  I have a complicated partitioning scheme.  When I was setting up partitions manually, the “format” option for some partitions was grey, while it was white for others and had an “X” for yet others.  I wasn’t sure what the grey meant and it wasn’t explained anywhere, leading to nervousness because I wasn’t sure whether my data-filled partitions would be formatted.  In the end, they weren’t formatted and it all turned out ok, but I’m still no wiser as to what the grey formatting box means.
• When logging in, it says something about “Run XClient Script”.  I installed the Cinnamon version and know about MATE, but what is this XClient Script?  They need to stop referring to things people are unlikely to know about without an explanation.
• No switch user button in the menu.  I have to lock the screen to get a switch user button.
• Right after installation, none of the package management software (including synaptic package manager) would start because of a malformed line in the sources list.  I had to manually edit the file to fix this.
• X is unstable.  Sometimes I log into an account and get blank white squares instead of icons or letters, and various parts of the screen are blanked out or garbled.  Installing the nvidia drivers appears to have fixed this.
• Nowhere in the manuals or release notes is installation of nvidia drivers mentioned.  You just had to know to install them.
• Synaptic still hasn’t gained the ability to install selected packages in the background while you continue to browse other packages.
• The login/logout system is generally unstable.
• If I switch users or log in and out 2-3 times, the computer crashes and I have to reboot.
• Even when it works, switching users results in some peculiar behavior.  The screen blanks out, I see a blinking cursor, then the screen blinks a couple of times before the login screen appears.  Sometimes there’s a text login that appears for several seconds before the graphical login window comes back.  An nvidia splash screen also becomes visible sometimes.
• The login system doesn’t play nice with the docking system.  If I switch users, the new login screen disappears from the external screen and only appears on the laptop screen (which I have to open while it’s in the dock to proceed).  After logging in while docked, attempting to switch users leads to peculiar bugs, including the inability to view programs started, even though they show up on the taskbar (there’s just no window for the program).
• After installing software, it’s sometimes necessary to log out and back in for the menu search to detect it.  (Other times it is detected right away.)
• They chose to include a crippled Add Users program.  There is already a program that lets you add users as well as set various permissions through group membership (like the ability to mount CDs or use a printer or VirtualBox).  In Mint they inexplicably discarded the feature-rich version in favor of a program that only allows you to add or delete a user.
• The alternative package management tool (mintinstall) is a frustrating mix of good and bad.  It has this great feature where you can begin installing some packages in the background while continuing to browse other packages (I can’t believe synaptic still can’t do this).
• However, it isn’t possible to quickly select or queue multiple packages for installation — there’s no right-click package selection.  To select a package, you have to double-click on it and then click an “install” button.  To then select another package, you have to hit a “back” button, double click on another package, and click the install button.
• Also, and this is quite important, once an installation is started there seems to be no way to interrupt it or cancel.  The cancel button doesn’t work.  If it is closed during installation, the tool closes the UI but continues to install packages in the background.  During this time it locks the package management system and there is no way to communicate with it.  So I ended up waiting a long time for it to finish downloading and installing packages I had changed my mind about.
• There still doesn’t seem to be any way to mirror the desktop over DLNA.  And still no way to stream PulseAudio sound output over DLNA or AirPlay.  There are no DLNA control points available that work with any of the music players.  There are supposed to be some programs that enable some of these, like rygel, xmms2-plugin-airplay, pulseaudio-module-raop, and a totem airplay plugin.  None of them works.  C’mon — it’s almost 2013, for goodness’ sake!  Even my Android cell phone can do these basic things.
• Caveat: Sound seemed to work out of the box.  Then it just stopped working.  I can no longer see the local sound device.  The only device that sound can be sent to is “Dummy Output”, which seems to be like /dev/null for sound.  That is, I can no longer play sound at all.  This problem went away after rebooting, but it’s quite unclear why it occurred and it seems likely to recur.
• Ubuntu One isn’t installed by default.  I don’t buy very much music, but unless it’s much more expensive or lower quality than 320 kbps I prefer to buy on Ubuntu One, as a way of supporting Linux. This is minor (Ubuntu One is easy to install), but since Mint is based on Ubuntu, I feel they should have included Ubuntu One.
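For the record, the malformed-sources-list problem above (the one that kept synaptic and the other package tools from starting) came down to commenting out the bad line and refreshing.  Here is a sketch of the idea, run against a scratch copy rather than the real file; on Mint the real list usually lives under /etc/apt/sources.list.d/, but check your own system before editing anything:

```shell
# Work on a scratch copy; the real file would need sudo to edit.
f=$(mktemp)
printf 'deb http://packages.linuxmint.com nadia main\ndebhttp://broken line\n' > "$f"

# Comment out any line that is not a well-formed "deb ..." / "deb-src ..."
# entry and is not already a comment.
sed -i -E 's/^(deb(-src)? )/\1/; t; s/^[^#]/#&/' "$f"
cat "$f"
```

After fixing the real file the same way, apt-get update ran cleanly and synaptic started again.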

Math in R plots

Posted in Computers by Armchair Guy on April 5, 2011

R provides a way (see ?plotmath) to insert math into titles and labels in plots.  An example: plot(1, main=expression(S[A])).  This will create an S with a subscript A ($S_A$) in the title of the plot.

But what if you have a variable called x, and you want $S_A$ and the value of x in the title?  For example, if the value of x is 3, you want $S_A = 3$ to appear.

I’m sure there’s a simpler solution, but here’s the simplest one I’ve got:

1. First, note that it would suffice to type in plot(1, main = expression(paste(S[A], " = ", 3))).  Of course, we want the value of x there, no matter what it is — not just 3.  If we try plot(1, main = expression(paste(S[A], " = ", x))), that will result in $S_A = x$ appearing in the title, not what we want.
2. The solution is to create the string we would have typed if we knew the value of x.  We do this like this: s <- paste("plot(1, main = expression(paste(S[A], \" = \", ", x, ")))").  If we now print the string s, it will show "plot(1, main = expression(paste(S[A], \" = \",  3 )))" (if the value of x is 3).
3. Now, we “run the string”: eval(parse(text = s)).
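The steps above can be put together as a short runnable sketch.  (It also turns out the simpler solution I suspected exists does: bquote() builds the expression directly, splicing in the current value of x via .().)

```r
x <- 3

# Step 2: build the string we would have typed if we knew the value of x
s <- paste("plot(1, main = expression(paste(S[A], \" = \", ", x, ")))")

# Step 3: "run the string"
eval(parse(text = s))

# Simpler alternative: bquote substitutes the value of x into the
# expression, so no string-building or parsing is needed
plot(1, main = bquote(S[A] == .(x)))
```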

There are some more complicated but flexible solutions, like integrating postscript output from latex into R graphs (using psfrag).

Watson and the Singularity – IV

Posted in Computers by Armchair Guy on February 17, 2011

On the last day, Watson was not nearly as dominant as on the second day, but won a big packet on the Final Jeopardy question to take its total winnings to much higher levels than either of its human opponents.  For a large part of the third day, Ken Jennings was in the lead.

Watson uses the correctness of previous answers to try to understand what a category means.  For example, in a “Name the Decade” category, Watson wasn’t sure what the category name meant — so it didn’t know that it could restrict its answers to the set {1900, 1910, 1920, …, 1990, 2000}.  When it is unsure in such a manner, Watson uses previous correct answers in that category — possibly by its competitors — to narrow down what the category means.  It was clear on the third day that Watson didn’t understand what certain categories meant, even after observing opponents’ correct responses.  This meant it did poorly on those categories as a whole.

Watson’s natural language processing, I think, is tailored to the task at hand — winning Jeopardy.  Like I said before, it doesn’t understand sentences the same way a human would.  While watching the show, I found that many of the answers were found in the intersections of two or more sets, but Watson didn’t identify all of the sets.

I would classify Watson as a mild sub-Singularity event at this point.  If indeed programs like Watson proliferate the way chess programs have — if Watson clones become much more powerful and lightweight enough to run on personal computers as assistants, perhaps with some help from the cloud — we will be on our way to real artificial intelligence.  Sequential improvements in such programs will eventually lead to super-human intelligence, much as the Singularity gurus predicted.  Eventually, APIs for NLP and this type of reasoning might become commoditized — unless companies like Google prefer to provide access APIs only, and keep all the computation hidden on their servers.

Why doesn’t Google already offer something like this?  First, Watson needed 2800 processors to answer questions one at a time.  The technology that Google has may or may not be equally advanced, but perhaps doesn’t scale up to allow answers for millions of questions yet.

Second, this is a card in Google’s hand that it doesn’t want to show unless necessary.  If a competitor (mostly Bing at this stage) appears to be making significant inroads into its search space, it can add this feature to jump ahead, so it’s insurance.  Revealing everything would just provide Microsoft with a “copy this!” blueprint.

Watson and the Singularity – III

Posted in Computers by Armchair Guy on February 16, 2011

On day 2, Watson comprehensively outscored his human opponents.

To some extent, it seems Watson is at an advantage because his pneumatic button-pressing system can react faster than any human possibly could.  This severely affected Ken Jennings, who obviously knew some of the answers and showed frustration at never being able to get to the buzzer first, shaking his head on occasion.

Perhaps a fairer way to assess Watson’s intelligence (as opposed to his button-pushing prowess) is to adjust Watson’s button presser to react at a rate commensurate with the human nervous system.

Although Watson is doing great, it is becoming more apparent that Watson doesn’t understand the nuances of language in the clues as well as a human could.  There’s a document here (PDF) detailing some of Watson’s programming.

Watson and the Singularity – II

Posted in Computers by Armchair Guy on February 15, 2011

So, I watched the first part (of three) of the IBM Watson Jeopardy challenge.  So far, Brad Rutter and Watson are tied for first place, with Ken Jennings somewhat behind.  I was interested in getting a sense for how Watson “thinks”.  One of the things I tried to do was get a sense of the extent to which Watson is “understanding” human language.  At this point the answer seems to be “not very well”.

Of course, it’s hard to glean much just watching a TV show, but it seems as if Watson isn’t quite understanding language the way we do.  If a clue indicates the answer is a member of two sets, for example, Watson sometimes seems to ignore the second set.  An example (paraphrased): This word can mean the bend in the elbow and also a thief.  Watson’s best guess was “knee”, which has nothing to do with the second set (words that can mean “thief”) though it does have something to do with the first set (words that can mean the bend in the elbow).  The right answer was “crook”.

Watson seems to do superlatively well when there are unique phrases to be matched, i.e. when the clue contains phrases that are pertinent only to the answer and wouldn’t occur anywhere else.  Perhaps this is not surprising at all.

It’s possible Watson’s thought processes are a bunch of shortcuts completely unlike ours.  It may, for example, simply be finding a bunch of words and phrases based on associations with keyphrases in the clue and then ranking them, rather than searching for words/phrases in the sets that the clue is asking for.

Perhaps the right test is this: is it easy to add a subroutine to Watson that would allow it to rephrase the clue in several simpler English sentences?  I don’t know.  So I’m still unsure whether to call the creation of Watson a Singularity defining moment.

Nokia and Microsoft?!

Posted in Computers by Armchair Guy on February 11, 2011

Nokia, one of Open Source’s biggest advocates and sources of strength, has practically merged with Microsoft, Open Source’s biggest enemy and saboteur.  The agreement goes beyond simple cooperation.  Nokia is killing MeeGo, an important open source initiative.

This has been on the horizon for quite a while, ever since Nokia hired long-time Microsoft insider Elop as its CEO, and intensifying with a leaked internal memo Elop supposedly sent to Nokia employees.

This probably will help Nokia in the long run, but it fundamentally changes the company’s character.  This is a sellout by a biased CEO.  I was with Nokia so far, but I’m switching to Android as soon as I can.


Watson and the Singularity

Posted in Computers by Armchair Guy on February 9, 2011

The Singularity is a recurring theme in artificial intelligence and in science fiction.

It refers to an event where a computer achieves certain significant feats of intelligence.  Different authors use it slightly differently, or use different terms.  Sometimes the word means achieving “self-awareness” (whatever that is).  Sometimes it means the creation of a computer that is as smart as a human.  Sometimes it means the addition of a technology that results in a massive increase in ability (the new state is usually one of higher-than-human intelligence).  Some authors speak of multiple singularities — computers caught in an ever-rising spiral of super-intelligence.

The AI promises of the 1980s turned out to be too grand and ill-founded to be realistic.  People thought then that they could program intelligence by programming the minutiae of thought.  This turned out to be a vastly bigger task than anticipated.

For a while, it seemed there were things humans could do that computers would never be able to do.  One of the biggest, and most visible, blows to that idea was Deep Blue’s defeat of Garry Kasparov.  Today, software (Rybka, Glaurung, Stockfish) running on an ordinary desktop computer will easily defeat the best human chess players.  But Deep Blue and its younger cousins don’t really have intelligence, at least not what we usually mean by it.  They’re “on-rails”, and can do very restricted things on very restricted input sets.

But all this doesn’t mean man-made intelligence is impossible.  Instead of programming intelligence, we can perhaps include techniques like evolving it or learning it.  Is that what Watson has done?

Watson is a massively parallel supercomputer, developed by IBM, that will compete on Jeopardy against Ken Jennings and Brad Rutter.  Watson can understand a variety of language nuances and sift through its massive database to attempt to find answers.  Given the diversity of subject matter as well as question phrasing on Jeopardy, this is quite a feat.  Does this qualify as intelligence?

It’s not clear to me how Watson works, but the bits I’ve gleaned indicate that it is a collection of a number of hand-written subroutines that interact in carefully human-tuned ways.  There’s no automated evolution or search of algorithms to try to make it better.  In that sense, it is still algorithmic, much like Deep Blue.  But Watson’s algorithm is much more complicated and chaotic than Deep Blue’s.  It sounds complex enough that I view it as a limited form of intelligence.

Perhaps we are hitting the first technological singularity, although it’s not the single explosive moment some have imagined.

Watson might have been a good learning experience — the engineers at IBM must have figured out a lot about how to make computers think.  But it still lacks the essential ingredient that sci-fi authors fantasize about.  We still don’t have automated techniques to take a given computer and make it better.  That is, we don’t know how to make computers improve other computers.  That would be a real Singularity.

LaTeX Forward PDF Search with Emacs

Posted in Computers by Armchair Guy on November 15, 2010

I wrote a blog post on implementing inverse PDF search with okular and emacs here.  This post is about the reverse: forward search.  That is, with point at any position in the LaTeX source while editing in emacs, a keystroke causes okular to center the corresponding portion of the PDF in its viewable area. A nice overview of LaTeX synchronization can be found here.

In the comments on the inverse search post, B. Slade suggested the procedure at http://www.bleedingmind.com/index.php/2010/06/17/synctex-on-linux-and-mac-os-x-with-emacs for forward search.  However, the instructions there require installation of AUCTeX version 11.86, while the latest version the Ubuntu 10.04 repositories have is 11.85.

I managed to kludge a solution for the AuCTeX shipping with Ubuntu 10.04 by slightly modifying some emacs code I found here: http://www.mail-archive.com/okular-devel@kde.org/msg04913.html. Most of the work was already done by Mark Altern and earlier authors; the only change was to tell it to use the .pdf instead of the .dvi.

A note of caution.  Forward search with okular is not very convenient.  This is because okular redisplays the PDF document every time you do a forward search — with a side “contents” pane and also repositioning the PDF.  So if you’ve removed the space-hogging contents pane (by pressing F7 twice) and zoomed and positioned the PDF to your liking, doing a forward search will undo all of that.  You’ll have to remove the contents pane again and re-zoom and re-position.  This severely limits the usefulness of forward search.  There doesn’t seem to be a way around this for now.

Anyway, here are the steps.

1. Follow instructions here to set up inverse search.
2. Source for okular-search.el is at the end of this post.  Copy it and put it in a file called okular-search.el somewhere in your emacs load-path.
3. Then add these lines to your .emacs:

(require 'okular-search)
(add-hook 'LaTeX-mode-hook (lambda () (local-set-key "\C-x\C-j" 'okular-jump-to-line)))
(add-hook 'tex-mode-hook (lambda () (local-set-key "\C-x\C-j" 'okular-jump-to-line)))


That’s it. Press C-x C-j to open a new okular viewing window. Subsequent presses of C-x C-j will reposition the PDF in that okular window to correspond to whatever’s at point in emacs.

Here’s the code for okular-search.el:


;;; (X)Emacs frontend to forward search with kdvi. See the section on
;;; FORWARD SEARCH in the kdvi manual for more information on forward
;;; search, and for an explanation how to use this script. This script
;;; is a modified version of the script "xdvi-search.el" by Stefan
;;; Ulrich, version 2000/03/13. The
;;; modifications were performed by Stefan Kebekus.
;;; Tested with Emacs 20.7.1 and Xemacs 21.4.
;;; Further modified to do forward search against the PDF output in okular.
;;;
;;; This program is free software; you can redistribute it and/or
;;; modify it under the terms of the GNU General Public License as
;;; published by the Free Software Foundation; either version 2 of the
;;; License, or (at your option) any later version.
;;;
;;; This program is distributed in the hope that it will be useful,
;;; but WITHOUT ANY WARRANTY; without even the implied warranty of
;;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
;;; General Public License for more details.
;;;
;;; You should have received a copy of the GNU General Public License
;;; along with this program; if not, write to the Free Software
;;; Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
;;; 02110-1301, USA.
;;;

(defvar okular-script "okular"
  "*Name of start script for okular.")

(defun okular-jump-to-line ()
  "Call okular-script to perform a `forward search' for the current file and line.
See contents of okular-script for details.
If AucTeX is used, the value of TeX-master-file is used as the filename
for the master .tex file; else, the return value of okular-master-file-name
is used (which see)."
  (interactive)
  (save-excursion
    (save-restriction
      (widen)
      (beginning-of-line 1)
      (let* (;; current line in file, as found in the documentation
             ;; of emacs.  Slightly non-intuitive.
             (current-line (format "%d" (+ 1 (count-lines (point-min) (point)))))
             ;; name of the `main' .tex file, which is also used as the .pdf basename:
             (master-file (expand-file-name (if (fboundp 'TeX-master-file)
                                                (TeX-master-file t)
                                              (okular-get-masterfile (okular-master-file-name)))))
             ;; .pdf file name:
             (pdf-file (concat (file-name-sans-extension master-file) ".pdf"))
             ;; current source file name:
             (filename (expand-file-name (buffer-file-name))))
        ;; ask okular to reposition to the source line, reusing any open window
        (start-process "okular" "okular-output" okular-script
                       "--unique" (concat "file:" pdf-file "#src:" current-line filename))))))

(defun okular-get-masterfile (file)
  "Small helper function for AucTeX compatibility.
Converts the special value t that TeX-master might be set to
into a real file name."
  (if (eq file t)
      (buffer-file-name)
    file))

(defun okular-master-file-name ()
  "Emulate AucTeX's TeX-master-file function.
Partly copied from tex.el's TeX-master-file and TeX-add-local-master."
  (if (boundp 'TeX-master)
      TeX-master
    (let ((master-file (read-file-name "Master file (default this file): ")))
      (if (y-or-n-p "Save info as local variable? ")
          (progn
            (goto-char (point-max))
            (if (re-search-backward "^\\([^\n]+\\)Local Variables:" nil t)
                (let* ((prefix (if (match-beginning 1)
                                   (buffer-substring (match-beginning 1) (match-end 1))
                                 ""))
                       (start (point)))
                  (re-search-forward (regexp-quote (concat prefix "End:")) nil t)
                  (if (re-search-backward (regexp-quote (concat prefix "TeX-master")) start t)
                      ;; if a TeX-master line exists already, replace it
                      (progn
                        (beginning-of-line 1)
                        (kill-line 1))
                    (beginning-of-line 1))
                  (insert prefix "TeX-master: " (prin1-to-string master-file) "\n"))
              (insert "\n%%% Local Variables: "
                      ;; mode is of little use without AucTeX ...
                      ;; "\n%%% mode: " (substring (symbol-name major-mode) 0 -5)
                      "\n%%% TeX-master: " (prin1-to-string master-file)
                      "\n%%% End: \n"))
            (save-buffer)
            (message "(local variables written.)"))
        (message "(nothing written.)"))
      (set (make-local-variable 'TeX-master) master-file))))

(provide 'okular-search)


LaTeX Inverse PDF Search with Emacs

Posted in Computers by Armchair Guy on September 2, 2010

Latex is such a time hog that I’m always in search of ways to improve my efficiency. My editor of choice is emacs. I’ve tried other editors but it usually turns out that there is some “tail” feature that I can’t find in any other editor. By “tail” feature I mean a feature that is used either very rarely or by very few people.

I’ve tried LyX, which is described as a WYSIWYM (What You See Is What You Mean) editor. It’s actually quite brilliant, and I do use it for simple MS Word-style documents. I’d use it in conjunction with emacs, switching back and forth between LyX and LaTeX source code editing, if it were not for the fact that it messes up LaTeX source code big time (all source code formatting is lost, making it pretty unreadable).

So I stick to emacs. One of the problems with a non-WYSIWYG editor for a language like LaTeX is it is hard to spot what you’re looking at. If you’re looking at a PDF file, it’s really hard to find the tex code that corresponds to what you’re seeing in the PDF. This is where Inverse PDF Search comes in.

What inverse PDF search does is allow you to click on a location in the PDF file and be transported to the corresponding spot in the LaTeX source. I find this tremendously useful.

Under Ubuntu 10.04 “Lucid Lynx”, this works almost out of the box, with a little bit of work. Here are the steps.

1. First, install Okular, the KDE PDF viewer. This may require a huge number of KDE libraries. Evince is nice but doesn’t support inverse PDF search yet.
2. In okular, go to Settings > Configure Okular > Editor, change the editor to “Emacs client”, and set the command to "emacsclient -a emacs --no-wait +%l %f".
3. Make sure you have Emacs 23. This is the default in Ubuntu 10.04.
4. We have to start emacs in server mode so that okular can talk to it. Add this line somewhere in your .emacs file: (server-start) The next time emacs is started, it will be in server mode.
5. When emacs compiles using latex, it has to include synchronization information (“source specials”).  Roughly, this is an index connecting every position in the PDF file to a line number in the source file.  To create the index, SyncTeX support must be installed.  Under Ubuntu 10.04 this is included in the package texlive-extra-utils, so make sure that’s installed.
6. To tell emacs to always use PDF (i.e. compile with pdflatex instead of latex) insert the line (add-hook 'LaTeX-mode-hook 'TeX-PDF-mode) into your .emacs.  This step is optional if you don’t want to automatically pick PDF each time, but everything in this post pertains to PDF.
7. Tell emacs to compile using source specials by adding the following line to your .emacs customizations: '(LaTeX-command "latex -synctex=1")  Alternatively, when editing a tex file, go to the menu LaTeX > Customize AUCTeX > Extend this menu; then LaTeX > Customize AUCTeX > TeX Command > LaTeX Command, then change "latex" to "latex -synctex=1" and click “Save for future sessions”.
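Putting steps 4, 6 and 7 together, the .emacs additions amount to something like the following sketch.  (The custom-set-variables form is one way to make the LaTeX-command setting stick; if your .emacs already has a custom-set-variables block, merge the entry into it instead.)

```elisp
;; Step 4: run emacs in server mode so okular can talk to it
(server-start)

;; Step 6: always compile LaTeX to PDF (pdflatex)
(add-hook 'LaTeX-mode-hook 'TeX-PDF-mode)

;; Step 7: have latex emit SyncTeX data for inverse search
(custom-set-variables
 '(LaTeX-command "latex -synctex=1"))
```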

Open Source versus Innovation?

Posted in Computers by Armchair Guy on October 9, 2008

I’m reading The World is Flat by Thomas Friedman, and he asks a question about open source that really made me think. What is the motivation to innovate if everybody gives their innovations away for free and nobody gets paid for them, which is what open source seems to suggest?

I don’t have a good answer, but it seems to work. As Friedman himself points out, many important innovations have come out of open source, including the Apache web server. I would go so far as to say that most innovations in the field have come from not-for-profit efforts. Google’s entire search infrastructure runs on Linux; Amazon’s entire web presence runs on Apache. It’s as real as it gets.

The question is tied to (and perhaps motivated by) statements from Microsoft bigwigs. Here is one that Friedman quotes (the inserts are his):

You need capitalism [to drive innovation.] To have [a movement] that says innovation does not deserve an economic reward is contrary to where the world is going. When I talk to the Chinese, they dream of starting a company. They are not thinking, ‘I will be a barber during the day and do free software at night.’… When you have a security crisis in your [software] system, you don’t want to say, `Where is the guy in the barbershop?’ — Bill Gates

But Bill Gates is hardly in a position to talk of innovation. Microsoft has not made any significant technical innovations in the last 10 years. Windows Vista’s UI is (feature-wise) just a bloated version of Windows 95, with some bling. Microsoft’s innovations are almost entirely on the business end: it has figured out effective ways to stifle innovation by competitors. So Bill Gates talking about what drives innovation is like a thief lecturing about honesty.

Gates’ comment about security is even more perplexing in light of Microsoft’s extensive record of poor security. Open source alternatives are far more secure in every way than anything Microsoft has. Maybe the reason you don’t want the guy in the barbershop is that when something goes wrong with the security, it’s probably the Microsoft guy who’s responsible.

But what Gates says is not really relevant here. None of this actually answers the question: how can you justify, theoretically, the claim that innovation can be sustainably executed within Open Source frameworks?

I don’t have a good answer.


PGF and TikZ

Posted in Computers by Armchair Guy on September 26, 2008

I guess I’m just completely out of touch, but until recently I hadn’t heard of an excellent package that allows you to draw sophisticated graphics within LaTeX using a graphics programming language.

It’s called PGF, and the component you use in your LaTeX/TeX source code is called TikZ. You enter simple LaTeX-style commands to tell it what to draw, in an environment right in your (La)TeX code, and it does the job for you. The excellent user manual can be found here, including instructions for installing and a tutorial. (If you’re an Ubuntu user, of course, there’s already a package you can install with a few clicks.) Great examples can be found here. Wow! What a package.
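To give a flavor of what the source looks like, here is a minimal sketch: a standalone document that draws a pair of labeled axes and a unit circle, all with standard TikZ commands inside a tikzpicture environment.

```latex
\documentclass{article}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  % coordinate axes with arrowheads and labels
  \draw[->] (-1.5,0) -- (1.5,0) node[right] {$x$};
  \draw[->] (0,-1.5) -- (0,1.5) node[above] {$y$};
  % unit circle, with a radius drawn to the 45-degree point
  \draw[thick,blue] (0,0) circle (1);
  \draw (0,0) -- (45:1) node[midway,above left] {$r$};
\end{tikzpicture}
\end{document}
```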
