In the progression of things, I'm generally in favor of more technology as opposed to less. Or rather: cleverer, more useful technology as opposed to low-tech solutions. This approach, however, breaks down for me in a few cases that I haven't yet been able to resolve to my satisfaction. I wager these breakdowns occur because I'm not "young enough" anymore: I have extensive experience with the low-tech, even if I can clearly see and evaluate the advantages of the high-tech available here and now. I also wager that, besides experience, emotions cloud my judgement. So I want to talk a bit about electronic and paper books, and electronic and paper journals.
Photo by: joelmontes @ Flickr
Books: Dead Tree or Bits
As a little kid I practically grew up in a library, and I've always read a lot. I've also bought a lot of books: not the most among the people I know, but give it some time and I'll have a pretty decent library at home. Provided I don't move continents and pass that library on to a second-hand bookshop, again.
In Taiwan it is slightly more difficult to find books in a language I can read (in all likelihood English) than back in good ol' Hungary or England. Not impossible - there are plenty of good ones - but two shelves in a two- or three-storey bookstore is not the greatest selection. So I turned to e-books and started to read a lot on my... phone. Using Aldiko on an HTC Desire is pretty good - surprisingly good, I'd say, given the small screen size. And since I bring it with me everywhere, it is very practical - for a few hours, until the constant screen usage devours the battery. But it is only really good for text-only novels; for any of my research papers or comics it is... well, comical. Thus I was thinking of buying a Kindle DX. Not the smallest Kindle, which I've seen and which is great, but which wouldn't solve my research-paper problem.
But then I really thought about it: having a library of paper books is a very different feeling from having an electronic one. I can easily have tens of thousands of books on a hard drive and will never be able to read all of them. Buying the same number of paper books would give me pause: do I really want to read every single one, or am I just too lazy to be selective? The whole experience is completely different, and somehow the practicality of e-books does not compensate me for the loss of control and of that inner "fuzzy feeling" (yeah, that's how well I can put my finger on it).
Resolution? The best would probably be if every book could be a bundle: paper + e-book for some premium on the price. Not double, but maybe 10%. Even at 25% I could justify it and would opt for the bundle instead of either of them on their own. Provided the e-book is DRM-free, of course. But until then, I think I will just keep my paper copies, no matter how much space they take....
And as for the Kindle... I guess I will procrastinate a little while longer (since I don't need it, "merely" want it) and will buy it on an impulse later.
Journals: stored in attic or in backup
Daily journals also have a lot of emotions attached. Paper journals are really good to write, and sometimes even to read back. They tell a lot about the person and give a lot of freedom in how to use them. On the other hand, typing being my main way of putting words down, I don't trust my hands (and the cramps in them) to produce something consistently legible. Searching for things is also near impossible without reading through a lot of material.
On the other hand, I set up a personal wiki not long ago, and it's just a matter of setting up the right pages and there I am, typing away the day's events and observations, with links to the relevant people and things, with search, with backup, with legibility.... and somehow with less involvement. I don't think one could possibly write a journal on paper and then transfer it to the computer; at least I don't think there's good enough handwriting recognition to handle that yet.
Resolution? In the meantime I started writing it in the wiki, thinking that things written down in any format are better than things not written down at all. And I'm hoping that the sad feeling of losing something by typing instead of writing will diminish with time...
Afterthought
It is telling, however, that there are so many things that I don't mind being changed. I don't mind internet radio instead of normal radio or live performances as my main source of music. No matter how close journals and daily planners are, I don't mind using just Google Calendar for all my events, even if that means I've lost a lot of the colorful notes I used to have in my planners. I don't mind sending emails most of the time even to close friends, with only the occasional snail-mail.
Maybe I'm just making things too difficult for myself. But I don't want to lose all the personality I have in exchange for convenience. At least there has to be a threshold.
Tuesday, 18 January 2011
Monday, 17 January 2011
Facebook Hacker Cup Round 1A, that wasn't
From earlier blog posts it is clear that I like programming contests. Since I spend so much (too much?) time on Facebook, I was also looking forward to Hacker Cup 2011. Last week I accidentally ended up qualifying for the first proper round (accidentally, because I had other engagements for the whole qualifier weekend, and my submission was late, but accepted). So I spent a while during the week preparing: checked out the proper time zones (Facebook events don't care about time zones at all, which has a number of awkward results for global events....), did a bit of practice on Coderloop, that sort of thing.
This "Round 1" was supposed to have 3 sub-rounds to cater to the global audience, and those worked out to be Saturday night 11pm-2am, Sunday morning 5am-8am and Monday morning 5am-8am for me in Taiwan. On Saturday evening I went to a wedding a few hundred km away, so I didn't expect to be back on time, but I arrived home around halfway through the first sub-round. Got a coffee (at midnight, that's livin' on the edge:) and set out to check the problem sets. I thought maybe I'd qualify and then wouldn't have to stay up, but even if I didn't (since everyone else had a 1.5h lead), it would be good practice. I finished one of the problems and wanted to submit it, but it didn't work. I checked the Facebook wall, and the organizers were saying "we know there's something wrong, just keep trying and it'll work". I tried a while longer, but then I noticed a newer announcement: they had ended the round 20 minutes early because of the high number of people having problems, and I should stick around to see whether there would be a 2nd and 3rd sub-round.... 2am and 3am came and passed, of course everyone was complaining on the Hacker Cup wall, and then came the decree: all other sub-rounds postponed at least until next week. Not very nice, considering people do make plans for things that are announced months in advance. Also, the story of the shoddy organization of this event does not end here, but more about that later.
After sleeping off the slight disappointment and all that coffee, today I wanted to finish the other problem sets, since one can always learn from these, and sometimes I just cannot rest until something is "done". That took up the better part of today, and here's what I've learned.
*** Spoiler alert ***
Problem 1: After the dance battle
You can find the problem description on the practice page by clicking the name on the left. That description annoyed me to no end. Maybe I'm a bit unfair, because TopCoder also likes to write problem sets that read more like fiction. But they at least make sure that every relevant thing is mentioned and everything is consistent. Here? No sign of that. E.g. two things stand out:
- the example input already violates the stated input format (the number N of test cases should be between 10 and 50, while in the test it is 5). This is not a problem that should break anybody's reasonable solution, but it looks careless.
- the problem is described as taking place on a square lattice, and one moves between "adjacent" squares. There's never a definition of that adjacency. I could reasonably imagine a setting where squares diagonal to each other (i.e. in the NE, NW, SW, and SE directions) are "adjacent". The only reason I know this is not the case here, and that only shared sides make squares adjacent, is that otherwise my results for the test cases come out different. Careless again.
def dijkstra(graph, start, end):
    """ Dijkstra's algorithm, based almost entirely on the pseudo-code in wikipedia. """
    dist = {}
    prev = {}
    for node in graph.keys():
        dist[node] = float('inf')
        prev[node] = None
    dist[start] = 0
    queue = graph.keys()
    while len(queue) > 0:
        node = sorted(queue, key=lambda val: dist[val])[0]
        # Since just looking for the path from start to end
        if (node == end):
            break
        if dist[node] == float('inf'):
            break
        queue.remove(node)
        for neighbour in graph[node]:
            if (neighbour in queue):
                alt = dist[node] + 1
                if alt < dist[neighbour]:
                    dist[neighbour] = alt
                    prev[neighbour] = node
    # Find the number of steps between start and end
    steps = 0
    node = end
    while prev[node]:
        steps += 1
        node = prev[node]
    return steps
It is probably very inefficient, but it still finishes quicker than I can say "Dijkstra". And that's it: just loop through the test cases. The complete program, with the graph setup and the rest, is in my github repo.
(Also, see update at the end of the post.)
Problem 2: Power Overwhelming
This one has a bit more story to it: apparently FB's own programmers messed it up, and the expected output was wrong in 3 of the 5 test cases... They took it off the practice page too, but someone posted it as a question on Stack Overflow, so you can read the setting there.
It is basically an integer programming problem. Most people outside of programming but still somewhat mathematically inclined would think that "integers are easier than real numbers". That is not true for optimization, where requiring integer solutions adds another constraint to the problem, one that is mostly solvable only by brute-force search.
Anyway, even with the problem taken off the list, I was still curious about the solution. I banged my head against the wall for quite a while because I couldn't find a good closed-form formula to solve the problem (the real-number solution would be trivial). "If at first you don't succeed, try, try again." But sometimes one should just stop and step back. When I found the said Stack Overflow page, that's when it dawned on me: I was right there at the solution but just didn't accept it: if there's no other way to optimize, do a brute-force search. Duh. So here's the method:
Find the optimal solution for real numbers. Vary the values in that neighbourhood, in a small but big enough range, and choose the best result.
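The method can be sketched generically. This is just my illustration, not the contest code: the objective (maximize x * y under a linear budget) and all the numbers are made up.

```python
import itertools

def integer_optimize(f, real_opt, radius=2):
    """Brute-force the integer points around a real-valued optimum.

    f        -- objective to maximize, taking integer arguments
    real_opt -- the (possibly fractional) optimum of the relaxed problem
    radius   -- how far around the relaxed optimum to search
    """
    ranges = [range(int(x) - radius, int(x) + radius + 2) for x in real_opt]
    best_point, best_val = None, None
    for point in itertools.product(*ranges):
        val = f(*point)
        if best_val is None or val > best_val:
            best_point, best_val = point, val
    return best_point, best_val

# Toy objective: maximize x * y subject to 3*x + 5*y <= 30.
# The relaxed optimum spends half the budget on each term: x = 5.0, y = 3.0.
def score(x, y):
    return x * y if 3 * x + 5 * y <= 30 else float('-inf')

print(integer_optimize(score, (30 / 6.0, 30 / 10.0)))  # ((5, 3), 15)
```

Here the relaxed optimum happens to be integral already, but the same scan would find the best lattice point even if it weren't.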
I'm still not completely satisfied, because I'm not 100% sure I have the global optimum, but it should be. The original test set's numbers make this harder to check: they chose "long" integers on purpose to make the problem harder to tackle. Anyway, my results are also in the github repo, and for the original test input
5
15 10 658931394179
92 37 304080438521
39 100 972826846685
24 89 306549054135
64 16 254854449271

my results are:

21964379805
1652611083
12472139015
6386438607
1991050385
Please let me know if there's a mistake (#1, 3 and 4 differ from the FB originals, but I checked that my results are better than the original answers).
Problem 3: First or Last
This was the problem I originally solved; it is shown on the practice page, click on the name on the left.
Not sure how anybody else solved it, but I used two insights that I gained recently while working on other puzzles:
- If the relevant inputs and outputs are integers, try to avoid floating point arithmetic if possible (i.e. no division). This will eliminate the possibility of rounding errors.
- Sorting is extremely useful and can answer a lot of questions if one chooses the right sorting criteria.
(a-1)/a * (y-1)/y ??? (b-1)/b * (x-1)/x

which can be reorganized to get rid of the divisions as
(a-1) * b * x * (y-1) ??? a * (b-1) * (x-1) * y
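A quick sanity check of that rearrangement (with arbitrary made-up positive values for a, b, x, y; their actual meaning comes from the problem statement):

```python
from fractions import Fraction

# The comparison with divisions and the cross-multiplied, integer-only
# form must always agree, since both sides were only multiplied by the
# positive common denominator a*b*x*y.
for a, b, x, y in [(2, 3, 4, 5), (7, 2, 9, 3), (10, 10, 2, 2)]:
    frac_cmp = Fraction(a - 1, a) * Fraction(y - 1, y) < Fraction(b - 1, b) * Fraction(x - 1, x)
    int_cmp = (a - 1) * b * x * (y - 1) < a * (b - 1) * (x - 1) * y
    assert frac_cmp == int_cmp
```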
After this sorting it's just a matter of choosing the number of segments where one has to overtake to get to the lead, and calculating the overall probability of survival on this (obviously very dangerous) track.
There's one more catch, though: this problem was made more complicated as well, but with the good aim of making the solution unique: the results have to be returned as reduced fractions... At this point I was getting ready to get down to some greatest-common-divisor magic and such, but Python came to the rescue again: the fractions module has everything I needed...
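For illustration, this is roughly all the fractions module needs to do here (the numbers are made up, not from the problem):

```python
from fractions import Fraction

# Fraction results are automatically reduced to lowest terms,
# so no hand-rolled greatest-common-divisor code is needed.
p = Fraction(3, 4) * Fraction(2, 9)   # (3*2)/(4*9) = 6/36
print(p)                              # 1/6
print(p.numerator, p.denominator)     # 1 6
```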
So, this code is also in the github repo.
Afterword
I really don't want to risk getting excited about the re-do of this round, but I know my curiosity will get the best of me, and I'll be there no matter what time zone it's in or what new ambiguous problem setting FB comes up with. But I also know that now I'll look out for the Google Code Jam even more; somehow they feel much more thorough.
But first, some sleep... :P
Update: Actually, the Dijkstra's algorithm I put here for Problem 1 is a bastardized version, a strange cross that is neither breadth-first search (since it uses weights, they just all happen to equal 1) nor proper Dijkstra (since all weights are fixed)... I should rewrite it. The Algorithm Design Manual actually suggests other methods when the problem is path-search (or "motion-planning") like here, but without much detail. I guess all of that is overkill for this situation: a breadth-first search should be just fine and would simplify things. I don't regret doing Dijkstra first, though. This is a learning process...
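For completeness, here's the breadth-first version I mean - a sketch assuming the same adjacency-dict graph shape as my code above (the grid example is hypothetical, not the repo code):

```python
from collections import deque

def bfs_steps(graph, start, end):
    """Number of steps on a shortest path from start to end in an
    unweighted graph given as an adjacency-list dict; -1 if unreachable."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == end:
            return dist[node]
        for neighbour in graph[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return -1

# A 2x2 grid of squares, adjacency by shared sides only:
grid = {(0, 0): [(0, 1), (1, 0)], (0, 1): [(0, 0), (1, 1)],
        (1, 0): [(0, 0), (1, 1)], (1, 1): [(0, 1), (1, 0)]}
print(bfs_steps(grid, (0, 0), (1, 1)))  # 2
```

With every edge weight equal to 1, this visits each node once and needs no priority queue, which is exactly why it beats the Dijkstra variant here.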
Friday, 31 December 2010
On being outclassed
I get a kick out of solving puzzles. The churning of (rusty) gears in my head is a great feeling - and many times an unstoppable one. That's why I can, after a nice lunch in our Common Room, go on with various Sudoku variants for ages. But while I find many games boring after a while, programming has never really got to that point so far. That's why programming + puzzles is such a tempting combination for me.
Not that I'm really good at it, au contraire. All I said is that I enjoy them. The place that has grown synonymous with programming puzzles for me is TopCoder, a site with regular "single round matches" or SRMs, which are basically frantic 75-minute coding fests, for score and glory. And learning. I've been on TopCoder since 2003, but I've only taken part in a handful of SRMs, about one a year, even though they are held almost every week. The schedule of this week's one was kind to my time zone: 10am is good for anything that requires a great deal of thinking. It was really fun, and while I didn't get a great score, I got some (unlike in the last few matches :P ), and it was interesting to see the differences between good and great coders.
The ranking system divides people into two divisions based on their scores. In every SRM there are three problems of increasing difficulty: Level One, Two and Three. Division Two's Level Two and Three problems are the same as Division One's Level One and Two, so one can see how people in the different divisions solve the same question. Oh, because you can see, check and challenge every submitted solution.
This time I finished the easiest problem but ran out of time on the second one (see links to the problems and explanations in the SRM 492 Match analysis). So I was really curious what the best (highest-scoring) solution looked like in my division. It's two pages of code with multiple helper functions, global variables and the like. Even the second and third best were similar in essence.
Then I took a look at the best solution among the top people: about 3/4 of a page, a single function, concise, organized.... And all that had to be written in a very short time to earn such a high score. (You can check some of this out in the SRM 492 Match statistics.)
All this got me thinking: there has to be a qualitative difference between the good and the great people, not just the "quantitative" difference in score. Most programmers who try hard enough would get somewhere with the problems they are given (in this competition, or in any programming). But what makes a person able to code so clearly? More practice? Or is it more like going from Siddhattha to Buddha, a transition that goes only one way: you are either enlightened or not, and if you are, you'll never look at the world the same way.
Thinking about it, this qualitative difference must be present in many other things as well. Like all the "good" coffees that I drink to fuel the brain, which cannot prepare me for the occasional find of a "great" coffee. The two share almost nothing but the name. Just like programmers: same title, different universe.
Probably I'll never get great, but I sure will try. And how? Practice, and learn more from people ahead of me (not hard: on TopCoder I'm currently at the 13th percentile, such a crying shame:). There are other places as well to solve maybe different kinds of problems: Coderloop and the Facebook engineering puzzles. But once, just once, I'd like to do a TopCoder Marathon Match. But that's for a different post.
PS: the coffee example just popped up because I had a really great one yesterday. Almost 24h ago and I can still feel its effect.
Friday, 10 December 2010
Chrome Web (Candy) Store
My first browser was Netscape in 1997. I never really cared for IE, for many reasons. Then along came Firefox and I really liked it. I started to spread the word and convert family and friends - with reasonable success. Then Google Chrome arrived. I really adored it. It was the fast, useful, kinda no-nonsense browser that I wanted. But at that time it didn't have any extension support, so I went back to Firefox for most of my Internet adventures and only had a few "flings" every now and again. But then it got out of control and I eloped with Chrome... Now I use it whenever I can (even if running the latest development version, like 10.0.607.0 at the moment, can sometimes be slightly painful - but that's by definition).
When Chrome got extensions, they felt like an evolutionary step. I can install something in the browser without a restart? It was like going from Windows to Linux, where instead of waiting for reboots all the time you could run the same system for weeks and months if you wanted to. Bliss...
I don't remember when I first heard of the Chrome Web (App) Store, but I do recall that I didn't think much of it. The idea looked neat but not like something I would use. A few days ago, however, it got a bigger advertising push and new features (as far as I can tell), like syncing apps between different computers (extensions did that before as well)... So I set out to try it.
It's like a first trip to the candy store - I go and install every one of them that looks interesting. So what do I have now?
My first impression of the whole Web Store is that they are still looking for their own definition of what an app is - and most crucially, how an app differs from an extension or even a bare bookmark.
My thought so far: in most cases the difference ranges from marginal to non-existent. This is especially obvious for Google's own "apps": Gmail, Finance, Blogger, .... Those are just links to the respective websites. Why make them apps, then? I have two guesses, which are not exclusive:
- well, they cannot be left out, duh!
- many of the websites we use now are closer to the stand-alone desktop software we know very well than to a traditional website. This might be a nudge of a reminder: "this is not a website, this is a web 2.0 application!"
The second one seems to be the more important insight: the blurring boundary between what's on the web and what's off. I guess the distinction is made when you go into a lecture room in the basement of your university and there are no plugs, no wifi, no 2/3/3.5/4G connections... Can you still take notes? App. If you have to ask your neighbor for paper, and yeah, a pen, then it's Web 2.0.... Duh.
And even besides the Google apps, most others are just links. Some feel more "linky" than others. That's partly a design challenge (there are some truly gorgeous apps, sure) and partly a UI challenge of when to ask for that darn "sign-up" that will bring the business in. The pushy apps, and the ones without Gmail login, feel more "old-school" than the seamlessly integrated ones. I definitely don't want to create a new login for every app I install. That would make them not more convenient but more annoying than their desktop cousins.
So, what did I install (and keep for more than 10 seconds)?
- Graphics apps. They need to improve, but I can see their potential, mostly for quick-and-dirty editing.
- Aviary has a bunch of those (along with e.g. a music editor); I have the Advanced Image Editor (competing with IrfanView or GIMP) and the Vector Editor (competing with Inkscape). We'll see how they work out. Though I was surprised that it's Flash instead of HTML5. Guess I expected too much.
- Sketchpad, because it looks cool; otherwise yet another Paint clone.
- Productivity/Work: there seem to be a lot of very similar apps here. I tried a few in many categories; currently I'm using these:
- Todo.ly, looks quite useful, simple and under development. I've had so many todo lists, but paper is still the best. This one is quite okay so far. And it has an API; I could think of something to do with that. Even though it's merely a link, it doesn't feel like one. Its app-ness is quite justified, I think.
- 280 Slides - well, I make most of my presentations in LaTeX/Beamer, but this could be handy when something is needed quickly. Though I have to investigate whether it can export to a format other than .pptx (brrrr).
- Write Space, this is one that I would really call an app, since it is truly standalone and works offline. Pretty much a PyRoom clone, but a good one. Goal: distraction-free writing. I think it gets there. I'd request the ability to edit more than one document, though.
- LucidChart, a flowchart, mindmap, wireframe and UI-mockup site. Haven't tried it much; kept it "just in case". Seems to be team-oriented, with collaborative editing.
- Weeb.ly, a website builder. For the ease of it. If there's something I really care about, I'll create it in a text editor anyway.
- Communications/Mobile:
- Seesmic, a Twitter/Facebook/LinkedIn client. Not bad, though I'm not sure how long I will use it: the Twitter interface is very similar and I've already gotten used to it, Facebook takes up too much of my time already, and I'm on LinkedIn not for the status updates. I feel obliged to try it, though, since it's the one I use on Android.
- Android Push Contacts - regardless of the name, it's for sending SMS from your computer through your phone. Received messages can also show up on your desktop/laptop. I installed it because it seems like a very convenient idea. I need to think of a reason/recipient for an SMS, though; 99.99% of my business is email now. It is also open source (the Android part, the App Engine site and the Chrome extension as well), so it could be an interesting source of learning.
- Fiabee, file sync between Chromes and other devices, phones included. I have to try it in detail. Looks like a flashier Dropbox with less space and more features.
- Random:
- Google eBooks, also slightly "linky", but it feels like a library (well, it should!), so I guess an app suits it. Going to check out my favorite public domain books; I haven't committed myself to buying electronic versions yet.
- Geni, a family tree creator. Great idea, and I was looking for proper family tree visualization software before (though the jury is still out on how "great" this one is), so I'll probably use it. It feels very weird that the program has so many fields to fill out about every single person - but when I think about it, there's nothing unusual there. On the contrary, other sites' few "bullet points" hollow out the differences between people. Nevertheless, I'm trying to set the right boundary between the usefulness of the site and my willingness to give them any info.
Sunday, 7 November 2010
Part of the network
Went to see The Social Network today. It was good storytelling, and even if it tones down some aspects (there were too many comments from characters that they could only really make if they could see into the future), life can be as strange as, or stranger than, (pure) fiction. I enjoyed it a lot, probably because it rang a bell with what I know about the computer science references, the hacker culture, the startups. Also probably because I "was" in there in 2004, when Facebook had just started to take off abroad and I had to have an ox.ac.uk email address to sign up. ;)
Of course, a lot has changed since then, and I have changed a lot too. For example, I did have a (disastrous) interview with Facebook earlier this year, and a similarly bad (but slightly more encouraging) one with Google. Despite not being a CS major. And despite never applying there - they came and asked me.
The outcomes of those interviews were not really surprising to me, but before both of them there was a time when I had to seriously consider: what if I DO get the job? Would I like it? Could I go from hobby to profession? Would I regret "giving up on Science"? Well, whatever my thoughts were at the time, it doesn't matter, since I'm still "here". But looking at the movie - the atmosphere, the offices, the work style... I feel I could give it a try. :) (No, not the "coke off a girl's belly" type of parties; that's the business section of the company I don't especially care for.)
So what would it take to get a job there? I feel that if I had a few months off, let's say at least 3 or 4, I could polish up on the things I'd need and could - not necessarily ace it, but - do very well for someone who is not, strictly speaking, a professional. If I had to, and if I wanted to.
On the other hand, if I took that much time and dove into any area of physics (which I'm supposed to do as my chosen profession anyway), how much would I gain? Could I ace that? I really hope I could.
But I find choosing between these two paths that seem to be open (even if just through a tiny little crack) very difficult, since they have almost nothing in common. I've found no good personal measure of success yet; maybe that would be a good start.
And in the meantime: I'm not inclined to pick up the arrogance, but I think I can still learn some creativity and perseverance from this fictional "Mark".
Wednesday, 25 August 2010
Keyboarding
Spending upwards of 8 hours every day on computers, I've really started to feel that many times the productivity bottleneck is not in the brain but in the channel that transfers my thoughts into the machine - the input. Keyboards and mice are more important than I had previously considered. I WAS interested in them before, but out of curiosity, without much serious thought about what I expect, what qualities I require... These days, however, I use multiple computers in a single day (some days 4 or 5 - not much to a true geek, I guess, but there you go), all of them with different keyboards, and some annoy the hell out of me...
After some consideration, what do I want? (not an exhaustive list)
- Good tactile feedback
Tactile feedback is great when typing blind (which is really the only way to be efficient). Some keyboards (especially laptop ones) have very reduced feedback, while some (older) desktop keyboards are pretty damn hard to press and make my fingers tired. Balance these out...
- Separation between keys, but not too much of it
Keys without gaps between them make typing much more error-prone. Too much separation, and it becomes an exercise instead of a flow...
- The bottom-left corner should be Ctrl
Ctrl is a very frequently used key indeed... So it should be easy to locate and easy to press. When it's in its original bottom-left place, one can press it with the side of the palm, and not much thought is needed about where it is... Moving the Ctrl key away from there is just evil, I tell ya...
- Stand-alone keys for Page Up/Down, Home, End, Insert, Delete
To save space on keyboards (mostly laptop ones, understandably), designers put these keys together with others and use a Function (Fn) key to activate them... But these are pretty frequently used keys (navigating pages and lines of text, copy-paste, ...), so why should you be slowed down by hunting for another key? When I see the same design on a desktop keyboard, it feels even worse. Sure, such keyboards have their place: there are applications (computerized cash registers come to mind) that need a small keyboard first and foremost, and those use a reduced set of keys. But for every other desktop, give me the full set...
- Slight inclination
I don't like flat keyboards much; desktop keyboards used with their wee legs out can be much more comfortable.
- Bonus points: a decent-sized Enter, a numpad, reduced typing noise
Enter is a very frequently used key, just like Space. Why not make it a proper size, so it's quick and robust to hit?
A numpad just makes it so easy to type numbers (duh); it's quite underappreciated by some.
Laptop keyboards are awesome if one wants to type silently; in the evenings, desktop keyboards can be very annoying...
So basically, at the moment I like something along the lines of the classic keyboard, something like this...
Maybe it could have a bit more border at the bottom, and a larger main Enter. But these are quite superficial things compared to other keyboards' issues... Notice the good arrangement of the PgUp/PgDn keys and friends: 2x3...
My current desktop keyboard (Asus):
An Fn key in the place of Ctrl? Just ridiculous... There are altogether 5 keys that need Fn (Print Screen, Scroll Lock, System Request, Pause, Break), and there's still space above the numpad to place them... Why??? During the day I get used to this arrangement (Fn and then Ctrl, left to right), and then every other keyboard I have has it the opposite way (Ctrl and then Fn). Cue confusion...
Also, Insert & Delete are placed in the top row with the F1-F12 function keys, and Home/End/PgUp/PgDn in a vertical line... Quite inconvenient.
EeePC 8G:
A small keyboard even for a laptop, thus PgUp/PgDn need the Fn key and there's no inclination, but it's still one of my favorites. The keys have reduced tactile feedback, but it's a good balance between feedback and noise, so it feels good... The key separation is too small for many, but I like it a lot. Not having a "Windows key" on it is a bonus point. :)
Sony Vaio:
I use this one sometimes - combined with the weirdness of a Chinese Vista (though that's not the keyboard's fault per se), it's one of the most tiresome typing experiences. Such huge key separation - it feels like olympic gymnastics...
Android:
This one is in software. In portrait mode, the one-finger (index) typing is pretty tedious, but in landscape mode the two-finger (thumbs) typing is one of the fastest I can do on any keyboard (mostly until I have to correct a typo). With predictive text it can be even faster, but that can also break the flow - I usually turn prediction off.
Non-alphabetic keys are on two alternative keyboard layouts (accessed by the "12#" key, plus another one), though some are on a speed-button (hold "12#", a shortcut overlay appears, then release over the desired key) - though maybe I'd have chosen a different subset of keys for that.
The weird thing is that the touchscreen seems to be less sensitive to my left thumb than to my right one - which is not really the screen's problem, but the way I'm "thumbing". I don't feel the difference, but it would help to figure it out.
Now let's look at some of the weird keyboards that I haven't used but that would be interesting for one thing or another...
Optimus Maximus:
A keyboard for about $1300? Well, when every key is a reconfigurable little screen, the costs add up... The idea is great - a completely adaptable keyboard (e.g. show the capital versions of letters when Shift is pressed, or define custom labels for the keys used in your favorite game). The execution, however, seems to be not as good: reviewers say it is quite uncomfortable... I'll probably use my one grand for something else (though one can buy the keyboard key-by-key too :)
Laser-projected virtual keyboard:
Connected through Bluetooth and mostly aimed at PDAs and smartphones, this is a special one... Project the keyboard onto any flat surface, monitor the reflection, and there you have your "key-press". Never tried it, but I've wanted to for years... I guess it would fail my requirements with its zero feedback, slow typing and small number of keys, but I can see the advantages sometimes: zero noise, completely portable, and just plain cool...
Rubber-body:
This one I actually tried in a store a little bit (just on its own, not connected to a computer, though), and it's surprisingly okay. The idea is that one doesn't need much to make a key - just an enclosed pressure sensor, and the enclosure can be anything, e.g. rubber. The tactile feedback is quite good (and not too springy :). The key separation can be a bit tricky, but I guess there are so many things the mind has to get used to about the whole board that it's manageable.
Also, one can just fold up the board when finished, and no problems with cleaning (did you know that keyboards are dirtier than the inside of most toilets? I can totally believe that...).
Anyway... after typing this much about the things I use to type with, I should change from meta-work (née procrastination) into real work mode... But I'll keep my eye out for better boards; if you have any advice, I'd love to hear it...
Friday, 25 December 2009
Reverse engineering a multimeter's language
Some time ago I found myself checking all the gadgets in our lab to see which ones I can connect to a computer. This is mostly because I like our gadgets and I like our computers, but together they are just that much more awesome... I've been checking oscilloscopes, function generators, counters, even high-power lasers. Most of them seemed to have a decent (well, at least usable) manual, so I could do some basic communication right away, until I come up with some more complete, awesome plan to automate the lab. But there are always exceptions to the rule, even if tiny ones.
There was this Pro's Kit 3PK-345 Digital Multimeter (pictured on the right) that could do RS-232 serial communication. Of course, this is not the most crucial piece of equipment of them all, but it could be quite useful for rough-and-ready monitoring and logging, since it has so many different measurement functions.
The problem is that there's no clue in the manual about how to communicate with it: what the settings are, what the "language" is, whether there are any quirks... There was, however, a little piece of monitoring software on an attached CD-ROM that was able to talk to the multimeter. But it was a very basic piece of garbage. I think it was really just thrown together, and I base this theory on the fact that the program icon is the default Delphi project icon (that one on the left) - I know it because I used Delphi some 10 years ago for a while (oh my, even for some pocket money, but that's another post). And if you don't bother to change the icon, I don't have much confidence in the rest of the code...
But at least it worked. At least there was an example of how I should do it, and I could set the serial sniffers on it and check out what those two were talking about... That was another adventure, finding a serial sniffer... I tried quite a few under Windows until I found the right one, giving me all the information I needed. But after all the testing I don't even remember which one it was; maybe an evaluation copy of a paid software.
From the sniffing, I got the initialization:
Baud rate 600
RTS off
DTR on
Data bits=7, Stop bits=2, Parity=None
Set chars: Eof=0x00, Error=0x00, Break=0x00, Event=0x00, Xon=0x11, Xoff=0x13
Handflow: ControlHandShake=(DTR_CONTROL), FlowReplace=(), XonLimit=0, XoffLimit=4096
DTR on
RTS off
DTR on
RTS off
That tells me most of the settings, plus that the multimeter needs those few extra DTR/RTS cycles (indeed, without them I couldn't get it to talk). Another thing the sniff told me is that the multimeter works in request/reply mode; that is, I have to initiate a reading by sending the sequence "D\r" (\r is the carriage return) to the device, to which it will reply.
Now, let's get down to business. Using PySerial, it is very easy to set things up:
import serial  # PySerial

ser = serial.Serial('/dev/ttyUSB0', baudrate=600, bytesize=7, stopbits=2,
                    parity=serial.PARITY_NONE, timeout=1, xonxoff=1,
                    rtscts=0, dsrdtr=1)
# The meter needs a few extra DTR/RTS cycles before it starts talking
ser.setDTR(1)
ser.setRTS(0)
ser.setDTR(1)
ser.setRTS(0)
ser.setDTR(1)
ser.setRTS(0)
line = ser.readline()
Notes:
- This is on Linux, but it works on Windows too by replacing /dev/ttyUSB0 with COMx, where x = 1, ... is the appropriate serial port number
- I just have to write "D" to the port, because the carriage return gets attached automatically
- The first two characters give the type of the reading, though one needs the unit (the last part of the string) to completely determine the type for current and voltage measurements. They are followed by a space
- The next five characters contain the reading value (signed)
- The last 1-4 characters give the unit of the measurement
The more or less complete catalog is the following:
### Output (13 chars) and line ending (+1 not shown: \r /carriage return, 0x0D/)
DC -0.000 V   (DC Voltage)
AC 0.000 V    (AC Voltage)
OH O.L MOhm   (Resistance, ohm mode, no connection)
OH 0.008kOhm  (kOhm measure)
OH 080.8 Ohm  (Ohm measure)
OH OL. Ohm    (Resistance, short mode)
DI OL mV      (Diode)
0000          (hFE /Forward Current Gain/ mode)
TE - OL C     (Temperature, without connection)
TE 0024 C     (Temperature, with thermocouple)
CA 0.011 nF   (Capacitor, 4nF scale)
CA 000.3 nF   (400nF scale)
DC -0.000 mA  (DC current, 4mA scale)
DC -000.0 mA  (400mA scale)
DC -00.00 A   (10A scale)
AC 0.000 mA   (AC current, 4mA scale)
AC 000.0 mA   (400mA scale)
AC 00.00 A    (10A scale)
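To go with that catalog, here is a minimal parsing sketch of my own (it's not from the original software, and `parse_reading` is a name I made up): it splits a reply into reading type, value and unit, and treats the capital-O over-limit tokens specially. The field handling is inferred from the sniffed samples above, so treat it as an assumption, not a spec.

```python
import re

# Over-limit markers use a capital "O", not a zero (inferred from sniffing).
OVER_LIMIT = {"O.L", "OL.", "OL"}

# Two-letter reading type (absent in hFE mode), then a signed value or an
# over-limit token, then an optional unit. Field widths are assumptions.
_READING = re.compile(
    r"^(?P<mode>[A-Z]{2})?\s*"
    r"(?P<value>-?\s*(?:[\d.]+|O\.L|OL\.?))\s*"
    r"(?P<unit>[A-Za-z]*)$"
)

def parse_reading(line):
    """Parse a reply like 'DC -0.000 V' into (mode, value, unit).

    value is a float, or None when the meter reports over-limit.
    """
    m = _READING.match(line.strip())
    if m is None:
        raise ValueError("unrecognised reading: %r" % line)
    raw = m.group("value").replace(" ", "")
    if raw.lstrip("-") in OVER_LIMIT:
        value = None  # over-limit / no connection
    else:
        value = float(raw)
    return m.group("mode") or "", value, m.group("unit")
```

For example, `parse_reading("OH O.L MOhm")` gives `("OH", None, "MOhm")`, while `parse_reading("TE 0024 C")` gives `("TE", 24.0, "C")`.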
The most annoying thing I found is that there are a number of different notations for "over limit" or invalid values, such as "O.L", "OL.", and "OL". And it is a capital "O" there, not a zero, so it needs extra checking in the code, not just a simple conversion. Nevertheless...
I wrote to the company as well, to ask for some inside info on this "reverse-engineered" language, and they promised to send me a complete reference. Yeah, they've been "getting back" to me since the end of August. Fortunately their help is probably not needed anymore; I just wanted to double-check things, and also to test how they respond to such a request. The initial impression was very favorable, but that has kind of decayed since then...
Now, because the language is so simple, it is very easy to implement code that "speaks" multimeter. A very bad Python version can be found in my GitHub repo, where I keep all the other hardware-related stuff as well. I seriously should get some Bad Code Offsets for this one, but at least it works so far.
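And since the protocol is just "ask, then read a line", wrapping it into a periodic logger takes only a few more lines. A sketch under the settings sniffed above (the function names here are mine, not from the repo; it sends the carriage return explicitly, just to be safe):

```python
import time

REQUEST = b"D\r"  # the meter only answers when asked (request/reply mode)

def format_row(timestamp, reply):
    """Build one tab-separated log line from a raw reply string."""
    return "%.1f\t%s" % (timestamp, reply.strip())

def log_readings(ser, logfile, interval=1.0, count=10):
    """Poll an already-opened pyserial port `count` times, one reading
    per `interval` seconds, appending rows to a file-like `logfile`."""
    for _ in range(count):
        ser.write(REQUEST)
        reply = ser.readline().decode("ascii", errors="replace")
        logfile.write(format_row(time.time(), reply) + "\n")
        time.sleep(interval)
```

Called as `log_readings(ser, open("meter.log", "a"))` after the setup code above, it gives a crude but serviceable data logger.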
Sunday, 29 November 2009
Offline media
Today I went magazine shopping at one of the 24h-open Eslite stores in Taipei. It's a fun place to go, in a multitude of ways:
One big floor for books & magazines.
Two big floors for designer stuff, gadgets and artsy things....
Two big floors for food, with one of the best cake shops I have ever eaten at (Awfully Chocolate).
One final floor with music, stationery and toys....
(Oh, and today a concert in front of the store by a band Wonfu Loves You [旺福爱你])
Anyway, I went there to get my monthly/quarterly dose of magazines, since we still love offline media, even if most of my knowledge comes from online.
Make Magazine - Volume 20
The primary objective was this one, the new issue of Make. The last issue was about DIY drones (autonomous planes :) and got me fired up enough. I didn't get too much done about them, but I definitely made a few entries on my Someday/Maybe list. Hopefully with some time it will get even better; it's all a process... Anyway, how could I resist when one of my heroes is on the front page: Adam Savage from MythBusters. :) He's someone who can really spread the love for science and critical thinking and tinkering...
You know about the "20% time" at Google? All the developers have 20% of their time to develop their own ideas, completely freely. How about having something like that for physics researchers? 20% of the time for tinkering and developing things that are not necessarily connected to the current project of the research group, but worthwhile anyway?
Maybe it's a bit tougher sell, because we (should) already pursue our own ideas - that's what research is about, isn't it? But still, there's always room to improve...
Anyway, if one is interested, there's also Instructables - no more excuses to be idle. Coming up later: reviews of Make: content.
Wired UK - Ideas Issue
In the section where Make: was, I found some other goodies too... Wired is one of my favorite online magazines, but I couldn't really justify buying the paper version when I had already read most of the online content. But! Behold, there's a new UK edition as well! And they had a special issue on big ideas... Well, I really couldn't justify getting all 3 of them, so let's get one. Always looking for inspiration from people cleverer than me. :)
Reading through it, I really admire the whole design and execution. It's still a "magazine", meaning relatively disposable content, so it's not going to be very long-lasting for me, but it's definitely high-quality brainfood. Gadgets I start yearning for at first sight, architecture, everyday chemistry, technology, company insights, people profiles, and much more... And a lot of advertisements that (almost) work on me... :P
I'm not surprised about the awesomeness... After all, being a writer for Wired is one of the entries on my Dream Jobs list... And you know what, that one can still be done. Astronaut, probably not anymore.
The Celebrity Twitter Directory
However, there are magazines there that feel like a complete ripoff. How about The Celebrity Twitter Directory? Haven't people heard of e.g. WeFollow, or Twitter Lists? Why someone would pay more for this thing than for my Make: and Wired combined is beyond me... But Twitter is certainly one of the hottest things out there right now: Wired had at least 2 proper articles connected with it (about 6 pages), while the only Facebook-related thing I could find was the tiny text ticker at the bottom of a page listing the biggest FB groups this month...
There are still some magazines I would have chosen on an unlimited budget... The Economist, Nature, Science, Newsweek, Scientific American... I have to check out the library; maybe they have them. And the university campus where my office is seems to have quite a few "magazine ladies" who try to sell subscriptions to a good number of these. Pretty good deals as well. Maybe I'll treat myself for Christmas? Got to think about it...
Sunday, 22 November 2009
Fixing someone else's code
It all started with another round of "I'll organize myself!". This time: getting my books in order. Maybe music and movies as well, but books first. So I've been looking around the interwebs and the Ubuntu Software Center [sic] to find a suitable candidate for the job.
Alexandria popped up and seemed to have great reviews. It only does books, but it seems to do it the way it should be done, they said. Giving it a test-ride myself - man, it was painful. The very first thing: it should be able to add books from Amazon (and other collections too, but those are tiny compared to that). Well, it doesn't. It just throws big hissy fits when trying, and that's all...
Now, instead of going and trying out a different program, what did I do? Grabbed git-svn and checked out the code. It's all Ruby in the end... After poking around for about an hour, it turns out that the library Alexandria uses for processing Amazon's XML-formatted answers (Hpricot) does some very, very stupid things. It completely borks even simple XML structures. I'm not sure how this got through any testing - probably it got through because there is no testing... And the problem has been there for months, and even though there's a new version of Hpricot for Ubuntu, it's available for the next release (10.04 Lucid Lynx) but not the current one (9.10 Karmic Koala). Even Debian (Ubuntu's "daddy") has the new version...
So, there were three options that I could do:
Anyway after a few hours, it was working just fine, I could add books, I could add books with multiple authors, it had the pictures and all the goodies... Not a good-looking fix since there's still a lot of code there which I don't understand the purpose - since I never seen it ever properly work before... But it was a start. Happy, commit the changes, clean it up later.
Then I checked the original bug-report again, and someone just made Option 2 into reality, thus making all my efforts pretty much obsolete. Tried, and works...
Now, lessons from the story, and this fine weekend:
Alexandria popped up and seemed to have great reviews. It only does books, but it seems to do that the way it should be done, they said. Giving it a test-ride myself - man, it was painful. The very first thing: it should be able to add books from Amazon (and from other collections too, but those are tiny in comparison). Well, it doesn't. It just throws big hissy fits when you try, and that's all...
Now, instead of going and trying out a different program, what did I do? Grabbed git-svn and checked out the code. It's all Ruby in the end... After poking around for about an hour, it turns out that the library Alexandria uses to process Amazon's XML-formatted answers (Hpricot) does some very, very stupid things. It completely borks even simple XML structures. Not sure how this got through any testing - probably because there is no testing... And the problem has been there for months: there is a newer version of Hpricot on Ubuntu, but only for the next release (10.04 Lucid Lynx), not the current one (9.10 Karmic Koala). Even Debian (Ubuntu's "daddy") has the new version...
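I can't reproduce the Ruby internals here, but the lookups Alexandria actually needs are trivial. Here's a minimal sketch in Python (my language of choice, not Alexandria's) with a made-up, much-simplified Amazon-style response, just to show how little the XML library has to get right:

```python
import xml.etree.ElementTree as ET

# A toy stand-in for Amazon's answer; the real response is far bigger,
# but the lookups a book cataloguer needs are no harder than this.
response = """
<ItemLookupResponse>
  <Items>
    <Item>
      <ItemAttributes>
        <Title>The Pragmatic Programmer</Title>
        <Author>Andrew Hunt</Author>
        <Author>David Thomas</Author>
      </ItemAttributes>
    </Item>
  </Items>
</ItemLookupResponse>
"""

# Navigate to the attributes node, then pull out the fields we care about.
attrs = ET.fromstring(response).find("./Items/Item/ItemAttributes")
title = attrs.findtext("Title")
authors = [a.text for a in attrs.findall("Author")]

print(title, "by", ", ".join(authors))
```

If a parser mangles even this kind of structure, no amount of application code above it can compensate - which is more or less what was happening with Hpricot.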
So, there were three options:
- Just wait until someone comes up with a solution. Sure, but then where's the fun?
- Repackage Hpricot for Ubuntu Karmic. I could, but I'm pretty annoyed with the Ubuntu packaging system right now, because I don't know it well enough to do even simple things. I tried, but failed. That's material for another post, but it's a bit too versatile and complicated for me at the moment. I'll keep trying, though...
- Change the code to use something other than Hpricot. This is what we're gonna do, especially because there's another library, called Nokogiri. It seems pretty similar, except that it was actually working.
Anyway, after a few hours it was working just fine: I could add books, I could add books with multiple authors, it had the pictures and all the goodies... Not a good-looking fix, since there's still a lot of code in there whose purpose I don't understand, because I've never seen it work properly before... But it was a start. Happy, commit the changes, clean it up later.
Then I checked the original bug report again, and someone had just made Option 2 a reality, making all my efforts pretty much obsolete. Tried it, and it works...
Now, the lessons from this story and this fine weekend:
- XML turns out to be fun once understood, and there are a lot of possibilities to interact with websites if one can use it well. (Know XML and JSON and you can handle most sites.)
- Ruby seems fine, but it is at once not different enough from Python and too different, so I don't yet see the point of digging any deeper. No offence, though.
- While so far I've found that Python-related projects have very extensive documentation, to this day I'm yet to find a comparably well-documented Ruby project. It might just be me, but the culture seems a tad different.
- If I ever have some time, I might do some benchmarking, and if Nokogiri really is faster than Hpricot, rewrite that code a bit better and see what those guys say.
- Even with this one bug fixed, I ran into so many others. No showstoppers, but bad enough not to recommend Alexandria just now. Maybe if development picks up a little more...
- It's interesting to check out other people's code, and even to be annoyed by it. That actually shows where I should improve as well. There's plenty of that...
- ...and lastly: maybe I should re-check my priorities and spend my next weekend a bit more usefully. :P
Thursday, 19 November 2009
Bad Computer Day
I think I'm getting to be a bit of an old geek now - or do I just have too high expectations? Many days I just hope the computers I use would simply work. Amazing: these days all three systems I use regularly have "Critical" or "Showstopper" level bugs that I can't seem to fix (yet).
- Windows XP SP3
For a while now, the system keeps becoming unresponsive, with svchost.exe crashing and throwing "Access Violation" errors whenever the machine is started. Well, two times out of three. It's enough of a problem to make the computer really, really annoying to use. Some people suggested it's because the machine is "too old". I call BS on that one.
Searching around, there was an advisory from Microsoft related to a problem with AutoUpdate. Oh great, all those auto-updating programs at startup...
It's pretty much what is described in KB927385 (You receive an error message after a Windows XP-based computer runs an automatic update, and you may be unable to run any programs after you close the "svchost.exe - Application Error" error message dialog box), and they have a few full pages of solutions, most of them in the console. Great. The Windows version I have to use is in Chinese, so even greater. Never mind, Google Images is actually quite helpful in figuring out which setting means what. (Related note: how much I wish for copy-pastable text in Windows dialogs and settings windows...) Okay, done all of that - services.msc, REGSVR32, net start whatever, everything... No good...
Well, today I found some related advisories that I still have to try, like KB932494, but it is stressing me out.
- Ubuntu 9.10 (Karmic Koala)
I really want to love Ubuntu, I want it to succeed, I want it to become abundant... That's why I was really cheering when I found the Windows Ubuntu Installer (WUBI). The promise is great: people don't even have to jump head-first into Linux, and they don't need years' worth of Linux admin training to set up dual-boot... WUBI downloads the correct CD image from the web, installs it on a Windows partition and sets up a proper dual boot. Perfect!
Except that some kernel or GRUB (the software that handles the boot process) updates keep making my system unbootable... What the hell? The machine works fine, I run the updates, everything finishes fine, reboot - nothing. Sometimes even worse than nothing: I don't like kernel panics at all, they're no better than the Blue Screen of Death... Not helpful, you are...
I've done this three or four times now: install, update, broken; reinstall, update, works; update again, broken... and so on.
I have a nice little bug report submitted, but the people who reply fall into two categories so far: users trying to save their data but clueless about fixing the problem, and people who are supposed to be maintainers but whose comments are just plain wrong most of the time - and if even I can tell, it must be very obvious... Yesterday it happened again. Not sure if I'll try to reinstall once more or just give up for a while until there's a fix. Argh...
- Arch Linux
My favourite "geek distro", which I enjoy using a lot, but they have a track record of shipping updates that break lots of people's computers - for the sake of the cutting edge. Often I don't mind too much, and they have a very responsive forum, but sometimes it's still just plain bad... The latest grief is that some recent update made my laptop's touchpad unusable. In the logs I can see psmouse.c complaining about the Synaptics touchpad, but after "trying to reconnect" there's nothing... I can plug in a USB mouse and that works, but come on!! That's what the touchpad is there for, and it works just fine under Windows, so it's not another hardware failure...
And I didn't even mention that my EeePC just doesn't start: the wireless and power lights come on, but nothing at all...
Maybe I should just go out for a while and read a book or something...
Sunday, 7 June 2009
some thoughts on open source project deaths
I've been reading Joel on Software for quite a while now (a few years, as far as I can remember), and he has a post from 2000 that still seems to be his most popular read: Things You Should Never Do, Part I.
He was saying:
[Why did Netscape die? They committed] the single worst strategic mistake that any software company can make: They decided to rewrite the code from scratch.
The argument goes that by writing from scratch you put yourself at a disadvantage on many different levels.
- You don't have any code to ship while rewriting, which can take a long time (I might add Hofstadter's Law for consideration). You lose customers/users and hand an advantage to all your competitors.
- You are basically locking in the level of technology you are aiming at, and cannot keep up with new developments while you are rewriting.
- You lose the advantage of previously fixed bugs, thus potentially committing the same mistakes again.
- You throw out a lot of knowledge as well: good algorithms, solutions, etc...
Exaile was a quite good music player (for my Linux system): it had syncing with my iPod, internet radio, album art, music collections, a plug-in system... Almost everything I needed was in their version 2.0... But there were plenty of bugs, and some issues that seemed hard to solve because of the design of the code. I thought a little bug-fixing (yay, Exaile is written in Python) could make it better and better while keeping its light weight. I was so wrong. By the time I started using the program, the developers had already moved on, and for version 3.0 they wanted to REWRITE the whole thing. I asked them about fixing bugs in 2.0. They said the 2.0 code is all but dead, nobody is working on it, so I should consider contributing to 3.0 instead. The problem is that 3.0 is not very functional, even after years. So many features are missing compared to 2.0 that I just gave up on using it (now rocking with Rhythmbox, though I don't like it much)... Maybe I'll go back, but I'm still disappointed.
The other project was Mitter, a desktop Twitter reader/updater. Also written in Python: lightweight, easy to use, quite convenient. I really liked version 0.4.5. Then they decided to rewrite the whole thing, once again - so thoroughly that at the moment it does not even run. They use version control, with different branches for development, but the top-level "release" branch does not run... How useful is that? Since I liked using it, I made a fork and started adding a few changes to version 0.4.5 myself: sending proper "in reply to" information, spell-checking with gtkspell, and trying to debug it so it runs well on Win32 too (just have to clear up some threading issues). These are all gradual changes, and all of what they seem to be doing could be done gradually too. I'm a bit sad that it's like this; the Mitter developers seem to know their trade pretty well, and I'd love to learn much more from them by working together. But I don't want to wait for the whole interface to be fixed just to have "in reply to"...
Am I too impatient? Unfortunately, my attitude to programming is that of puzzle solving: there's a problem that needs a solution, and once the solution is found it's not very interesting anymore. That makes me a pretty bad maintainer, but an enthusiastic (even if not very good) coder. But seeing these projects committing the Netscape error again, and potentially losing users and developers over it - that's not much fun. I have no influence over what they do, and the "advantage" of open source is that, just as I did, one can fork a project and try to do a better job. And in the meantime everyone loses...
The availability of better (more user-friendly) programming tools and languages, tutorials, information and means of collaboration has made it easier to create some amazing applications. But it didn't make people better programmers, unfortunately... Let's see if these two can achieve their "second coming"... I hope so, because I once felt a lot of affection for them. :)
Labels: exaile, joel on software, mitter, open source, programming
Saturday, 16 May 2009
facebook's walled garden is still walled with open API
I'm not sure why, but recently I got on Twitter again. I don't like keeping a website always open in my browser; it's much more convenient to have a desktop app do the useful work for me. Personal preference, I guess... Going minimalistic, I found a reasonably nice one for the job, called Mitter.
Then I got thinking: since I spend much more time on Facebook (FB), why not have the same thing for that? An app that lets me see my friends' statuses, comment on them, update my own, see their comments on the update, these kinds of things... FB has its own API, so there should be plenty of opportunities to do just this. Well, I can't say I found many... I heard that TweetDeck will have some integration, but that's just too big for me - I don't want to install all of Adobe AIR for this at the moment (and wouldn't have much space for it on my EeePC anyway).
So what should we do? Check out how to make one ourselves... Actually, swapping out Mitter's internals and replacing them with FB calls could work - in theory. But reworking something written by another person is always more complicated than one first thinks. That leaves "from scratch".
Next step: check out the FB API documentation. Yeah, status.set, for example, is the first thing I want to be able to do. How do I go about doing that?
Well - nothing is as easy as it seems. Twitter lets you use any app you desire for any purpose. You can just hack together your own and distribute it, and it will be just fine...
FB, however, wants to control exactly what happens, who has access, and how much. And in the process it makes certain kinds of apps impossible.
To illustrate what I'm talking about, here's the process I figured out for using a simple desktop app, written in Python, to update my status:
- Create a new application on FB Developers. When done, set its type to "desktop".
- On your app's page, note the "API Key" and "Application Secret" (Secret Key). The API Key identifies your app; the Secret Key is needed for your app to make API calls. As far as I could figure out, this is so that your app doesn't get hijacked by someone who merely knows your API Key (which is public). But the need for this Secret Key makes all the difference (more on that later).
- Set up the permissions for this app to update your status. See Extended Permissions in the wiki. More specifically, point your browser to a special web address and set the permissions there. Afterwards you can remove permissions from the usual settings tab, but the settings page shows no list of all available permissions, only those your app specifically asked for. I guess "publish_stream" and "read_stream" should be enough most of the time.
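Out of curiosity, here's roughly how a call like status.set gets assembled, sketched in Python. The signing scheme - an MD5 hash over the alphabetically sorted "key=value" pairs with the Secret Key appended - is how I remember the wiki describing it, and the key values below are obviously made up, so treat this as an illustration rather than a working client:

```python
import hashlib
import time

API_KEY = "0123456789abcdef"     # hypothetical; the public app identifier
SECRET_KEY = "fedcba9876543210"  # hypothetical; must never be published

def sign(params, secret):
    """Signature over the alphabetically sorted 'key=value' pairs plus the secret."""
    payload = "".join("%s=%s" % (k, params[k]) for k in sorted(params))
    return hashlib.md5((payload + secret).encode("utf-8")).hexdigest()

def status_set_params(status, session_key):
    """Assemble the parameters for a status.set call, ready to POST."""
    params = {
        "method": "status.set",
        "api_key": API_KEY,
        "session_key": session_key,  # obtained via the browser login dance
        "status": status,
        "call_id": str(int(time.time() * 1000)),  # must increase per call
        "v": "1.0",
    }
    params["sig"] = sign(params, SECRET_KEY)
    return params
```

The point to notice is the last line: without the Secret Key there is no valid sig, hence no API call - which is exactly where the trouble with open-sourcing a desktop app starts.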
Now the things that bug me:
- Since you have to keep your Secret Key, well, secret (required by the terms of service as well, I think), one cannot really make an open-source desktop app: it wouldn't be able to make any API calls without the key... In the wiki some people argued that even closed-source apps aren't safe - just because you compile something doesn't mean hackers can't reverse-engineer it... So what could one do?
One suggestion is to have a web interface for your desktop app that handles the login, and the login only. This is doubly annoying: I didn't want to write a web app but a desktop one, and the login authentication is still done by FB anyway - a lot of complexity for nothing.
Another suggestion is that every user builds their own version of the app, with their own API and Secret Keys and everything... That pretty much rules out all non-geeks, and says bye-bye to any branding or community dreams...
- The login is always handled by FB. No way around it. You have to have a browser: your app has to go to a special address and get some info back from FB (a "session key") to be able to operate...
Never mind; at the moment there's a very tiny, badly written, but functioning version of my status updater, minibook. The source can be found in the minibook github repo. It doesn't do much: it logs you in and can send status updates. It's badly written because it's basically a modified gtk.Entry example. So any comment is appreciated, but I don't think it will stay like this for long. If you feel like it, you can even fork it and help make it better... At the moment it works under Linux; I haven't tried it under Windows. Will try that next time I get around to booting into Windows ;) .
In the end I got my Twitteresque status updates from my own app, but only by jumping through hoops and without really being able to share the results with many people. Just following the usual motto: "Why do you do it? Because I can."
Wednesday, 24 December 2008
The version control mess I got myself into.
I didn't know what I was starting when I took Getting Things Done off my shelf once again... I read about a third of it a few years ago, and now I'm starting from the beginning again, maybe because I see the success other people are having with it...
What's the problem? Well, every day I spend about 6-8 hours (at least!) with computers, so any organization has to include all my "digital pockets". Yeah, right... I have two laptops, a desktop I use occasionally, a computer at work, an Openmoko (which is almost a computer in itself - sweet Linux power), 3-4 USB pens, and two external hard drives... That's a lot of stuff to keep synchronized and organized...
And it didn't help the cause that I jumped in pretty deep right away. Geeks that we are, let's use a version control system (VCS) for our data! Okay, which one? Of the main ones, SVN and CVS feel old (to me), so nope; that leaves Git, Bazaar and Mercurial. The first is very popular, the second is getting popular, and I don't know much about the third (or any other). But if I manage my backups with version control, what about my occasional programming projects? Should they use the same or a different VCS? Ideally the same, because that means fewer headaches, fewer incompatibilities, fewer new things to learn... But then maybe I'll want to share my programming with the world, so let's check out the source-code hosting sites!
Well, the three biggest ones seem to be Repo.or.cz, GitHub and Launchpad. And of course they use different VCSes: Git for the first two, Bazaar for the last... Okay, let's look at the features. GitHub is social, has stats, and is shiny, but offers only 100MB of hosting. Repo.or.cz is very, very simple and unlimited. Launchpad is unlimited and has a bug tracker, forums, feature planning, software translation... Okay, let's see which projects I know use what. Git: the Linux kernel (of course - Linus wrote Git exactly for this purpose), Android, Cairo, HAL, D-Bus, Perl 5, Samba, VLC, Fedora... Bazaar: Exaile, MySQL, Ubuntu... So there are more interesting things on Git (for me at least)... But, but, but...
Ah... This is starting to feel like the question of whether we should have Japanese or Indian for dinner. Both are great, similar but not compatible; I have favourite dishes in both; and each has a major fanbase who will tell you how much better one is than the other. And both can stain your shirt if you're careless (I have an amazing ability to bork computer things)... ;)
Sooooooooooooo... From getting organized I ended up in a philosophical spiral with no possibility of a simple this-or-that answer at the end... Great. Let's just choose one cuisine and have dinner already.
I started with Git then, because for a bit it looked easy (or easier). Well, it got confusing very quickly. When I modify things on two computers and try to reconcile the data, more often than not I run into "X would be overwritten by update, cannot merge", "Y is not uptodate (sic), cannot pull changes" and so on... Damn, I've already spent a week reading documentation and writing my own little notes, and it still ends up a mess I have to untangle manually. So more learning ahead, with the danger that the more I have to learn now, the more I can potentially forget and bork later...
Also, I don't want one single big repository, but thematically sorted smaller ones. Syncing them all manually is pretty tedious. Fortunately Google ran into this as well when making Android (which literally has tens of Git repos) and wrote a new program for it, called Repo. Though I think it relies on a particular older version of Python, and my Arch Linux is not known for keeping any version but the bleeding edge (sometimes even losing a lot of blood before getting patched - {bad} pun intended). So this requires a bit more effort to set up.
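Until I get Repo running, a poor man's stand-in is not hard to sketch. Here's a hypothetical Python helper (the layout - one directory per thematic repo under a common root - is my invention) that just builds the pull command for each repository; actually running them via subprocess, and reporting conflicts sensibly, is the part that still needs care:

```python
import os

def sync_commands(repo_root, repo_names):
    """Build one 'git pull' invocation per thematic repository.

    Uses git's -C flag to run in each repo's directory, and --ff-only
    so a diverged history stops loudly instead of half-merging.
    """
    commands = []
    for name in repo_names:
        path = os.path.join(repo_root, name)
        commands.append(["git", "-C", path, "pull", "--ff-only"])
    return commands

# Hypothetical repo layout, just for illustration:
for cmd in sync_commands("/home/me/repos", ["notes", "dotfiles", "projects"]):
    print(" ".join(cmd))
```

It's nowhere near what Repo does, but for a handful of repos even this beats typing the same three commands in three directories every evening.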
This makes Git (and, as far as I've checked, every current VCS) pretty much unsuitable for laymen. For example, I'd love to get my girlfriend to put her SAS programming under version control, so she won't lose her work the way she has a few times. Yeah, show her the Windows Git interface (she's on XP & Vista) and all she's gonna say is: "Oh, kill me now..." Too bad.
So now, after all this ranting, the way forward is the way back: going back to the manuals to check the situations I now know I will encounter, then back to the drawing board to make a plan (David Allen would be proud). But I won't give up. I think having any backup is better than having none at all.
Monday, 28 July 2008
Wow, did Gmail hear me by any chance?
Just the other day I was complaining about web security, and how Gmail is almost good: secure login, but possibly insecure browsing afterwards...
Today, while checking my email, the Ad Bar (yeah, I actually read the ads there sometimes) came up with this: Official Gmail Blog - Making security easier. I don't really think I had anything to do with the change (it would be nice, though :) ), but the important thing is that they seem to be trying. Me happy... Now stop checking those emails and get back to work...
Sunday, 27 July 2008
Adventures with Windows security
Windows security really drives me nuts... The whole thing. It's not that it should be easy, only that it shouldn't be impossible, and this unnerving.
I had to reinstall my Windows XP recently, due to a failed hard drive. I could have restored from a back-up, but the system had been pretty slow recently (no other problems, though), so I thought I'd give it a fresh start. Of course Acer didn't supply any install CDs with my computer, so let's download one from the Web and use that shiny, holographic, tamper-proof Windows XP(TM) serial number, attached to my laptop case with superglue (I think). Yeah, right: of course the install CD told me that it is an invalid code... So here you go, I have a code but I'm a pirate, forced to use some knock-off code from the web again, and we are not even at security yet...
So where I was: a fresh new install, pulling down all the necessary software updates and a new Firefox, and let's get started with digging the trenches against the invading forces...
Firewall, I need one for sure... I ended up with COMODO Firewall Pro, which is free for personal use. I had two previous generations of this program (one still on Win98, and one on the previous install), and I was glad to see that they made some effort: it looks much better now, more logical - even if the amount of possible settings could make your head spin...
One thing was different - the included "Proactive Defence". What it does is check every single operation of every running program against some malware-blocking criteria, or such. In the end, it just prompts you ten times a minute that:
"XYzw.dll" is trying to use "AbCD.com" for an unidentified purpose. If you think it is a safe operation, click authorise.
Or:
"Blabla_Nice_Program.exe" is modifying the registry entry "HKLM/Software/Run/Currentrun/OMG/BBQ/WTF/", do you authorise? Well, we no longer say "yes", instead we say "affirmative"...
How would ANYONE really know what to do with EVERY program? Is it alright if "system.exe" uses "explorer"? No idea; what ends up happening is click after click: "authorise", "authorise", "authorise"... So, does it protect?
I assume not. One day into the new setup, I could no longer search Google, Yahoo or Altavista. MSN was there (but no, I'm not using that for search). The answer was always "waiting for reply". No going directly to their sites, no using the search bar in Firefox... Gmail was working and iGoogle was there, so it must be a problem with my machine, not with the tubes. Fortunately there's a Terminal Server I can log into at the office, so I could look for info on this strange behaviour. Apparently there's a trojan called Qhost, which would do something similar. Download the Symantec removal tool for Qhost - nope, nothing. Look a little bit further; use carpet bombing instead of a precision sniper attack, so let's get a spyware remover. Yeah, which one? In the end I settled for Spybot Search&Destroy. It's pretty minimalistic, and in many corners it looks the way free software tends to look (yeah, free once more...), but apparently it does its job...
After 20 minutes of crunching away, it came back with the diagnosis: you have Virtumonde. "Web access may also be negatively affected. Vundo may cause many websites to be unaccessible; these websites will just hang." Yeah, exactly.... Let's remove... Done.... Wow, everything works again! Great....
So, in the aftermath, I just disabled the defence feature of the firewall, as it proved pretty useless. I kept Spybot and "immunized" my system. It does a few clever-looking tricks that could cause problems later but might work: e.g. redirecting DNS queries for known malware websites to 127.0.0.1, which makes them unable to function. We'll see how this works in practice. I'm also looking for an anti-virus program: AVG Free Edition (been there, done that), Moon Secure Antivirus (it was pretty crappy when I tried it, and slowed evvvvrrrryytthinng down), Avira Antivir Workstation (going to try this one now; I think I had it some years ago, but let's see what it can do nowadays).
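The trick behind that immunization is simple enough to show: entries in the Windows hosts file (C:\WINDOWS\system32\drivers\etc\hosts) that point bad names at the local machine. A sketch, with made-up domain names for illustration:

```
# Known bad domains resolved to the local machine, so they can never load:
127.0.0.1    some-malware-site.example
127.0.0.1    tracking.bad-domain.example
```

The catch, of course, is that the list has to be kept up to date, and an oversized hosts file can itself slow lookups down.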
But the whole thing is just so annoying. The Windows Registry. The Windows services and system files - where the same file serves a dozen different functions, and half a dozen copies are running at the same time. Where there's no way to know what's a malicious attempt and what's a legitimate request from a piece of software...
If this happens on my parents' computer and I have to diagnose it from a distance, I'll go nuts and they won't have a working system for quite a while.
Anyway, my feeling is that I'm probably more lame than I thought (come on, getting infected on the first day!!!) and that even if Linux has tens and hundreds of annoying things (subject of many future posts, probably), those annoyances now feel more manageable, more transparent, and more familiar... I'm really looking forward to the day of my complete switch, when I don't have to worry about this many firewall/spyware/virus/malware things. I'd rather fight software bugs...
Now, just switching off the Internet, taking a book that I've wanted to read for a while, and going outside... maybe a computer virus infection does have a positive side...