When I read this blog entry, I could not resist posting a big, warm
ACK!
The theory works quite well here in Switzerland, too.
Refactoring is a cool thing to do: You go back to the drawing board and redesign some parts of your application so that they fit the new requirements that have built up over time. Sometimes you take old code and restructure it, sometimes you just rewrite the functionality in question (or even the whole application, but I don’t count that as refactoring any more).
Code always tends to get messy over time as new requirements arise and must be implemented on top of existing code. Not even the most brilliant design can save you from this. It’s impossible to know what you are going to do with your code in the future.
Let’s say you have an application that is about orders. Orders with ordersets that somehow get created and then processed. Now let’s say you create quite a usable model of your orders and ordersets. Very well. It’s nice, it’s clean and it works.
Then the customer comes along, and over the years new features are added. Let’s say one of them is an inventory mode. You notice that these new inventory records have quite a lot in common with your orders, so you reuse the order code, but add some features.
Now, full stop! It has already happened. Why on earth are you reusing the old code and “just adding features”? That’s not the way to go. The correct solution would be to abstract the common parts of your orders and inventory records into something like TProductContainer (using Delphi naming conventions here), which has two descendants, TOrder and TInventoryRecord.
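A minimal sketch of such a hierarchy could look like this. Only the three class names come from the text above; every member below is made up for illustration:

```delphi
type
  // Common base class: holds everything orders and
  // inventory records share.
  TProductContainer = class
  protected
    FItems: TList; // hypothetical list of line items
  public
    procedure AddItem(AItem: TObject); // shared behaviour lives here once
  end;

  // An order is a product container plus order-specific behaviour.
  TOrder = class(TProductContainer)
  public
    procedure Process; // hypothetical order-only feature
  end;

  // An inventory record reuses the shared parts and adds its own.
  TInventoryRecord = class(TProductContainer)
  public
    procedure TakeStock; // hypothetical inventory-only feature
  end;
```

The point is that the shared behaviour exists exactly once, in the base class, instead of being smeared across the order code.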
But this comes at a cost: It requires time, and it requires quite a few steps:
Now try to convince the project manager, or even your customer, that implementing the required feature can be done in x days, but that you’d like to do it in x*2 days because that would be cleaner. The answer will be another question: “If you do it in x days, will it work?”. In the end, you’ll have to answer “yes”. So you will be asked: “If you do it in x*2 days, will it work better than in x days?”, and you’d have to answer “no”, as the whole point of cleaning up messy code is to keep it running just the same.
So, in the end, those things accumulate until they cannot be put off any longer and the refactoring has to be done no matter what, just because implementing the feature takes x days plus y days merely for understanding the mess you have created over time, y being 2x or so.
The mean thing is: The longer you put off the inevitable, the longer it will take to fix, so in the end it should always be the x*2 way. If only those uneducated people would understand.
I think PHP scales well because Apache scales well because the Web scales well. PHP doesn’t try to reinvent the wheel; it simply tries to fit into the existing paradigm, and this is the beauty of it.
Read it on shiflett.org, after a small pointer from Slashdot in the right direction. This guy really knows what he is writing about. Or at least it seems that way to me, as I think exactly the same way he does (which is a somewhat arrogant way of putting it, I suppose :-)).
Seeing this in my referrer log, and seeing that Robert, who comments here, is on the PostgreSQL team too, I come to the conclusion that someone on the Postgres team with enough influence to propose links for the weekly newsletter seems to be reading my humble blog.
Thank you for mentioning my posting in your weekly news. That was very kind.
For quite some time now, customers have been telling me that PopScan seems to have problems with proxy servers using NTLM authentication. I knew that, and I told everyone that this was not supported.
But I could not understand it: Why did it not work? I mean, I had switched from my own HTTP routines to WinInet just to be able to use the system-wide proxy server settings and connections.
When using WinInet and INTERNET_OPEN_TYPE_PRECONFIG with InternetOpen, the whole thing is supposed to just work, as long as IE itself works. But in my application this wasn’t the case, and I had no idea why. As soon as NTLM was enabled on the proxy, I just got a 407 HTTP_PROXY_AUTHENTICATION_REQUIRED status from the proxy, despite the correct password being used.
MSDN was of help (taken from the documentation of HttpOpenRequest):
If authentication is required, the INTERNET_FLAG_KEEP_CONNECTION flag should be used in the call to HttpOpenRequest. The INTERNET_FLAG_KEEP_CONNECTION flag is required for NTLM and other types of authentication in order to maintain the connection while completing the authentication process
I added this flag (and some more, now that I was at it anyway), recompiled, tested and, yes, finally it does what it should: it works out of the box. No more 407, no more entering passwords for the users. One more thing that switched its state from “not supported” to “supported and working splendidly”.
This is with an NTLM-enabled Squid proxy, but it should work with Microsoft ISA too.
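In Delphi terms, the fix boils down to one extra flag in the HttpOpenRequest call. A rough sketch follows; error handling and cleanup are omitted, and the host name and agent string are made up:

```delphi
uses WinInet;

var
  hInet, hConnect, hRequest: HINTERNET;
begin
  // Use the system-wide (IE) proxy configuration.
  hInet := InternetOpen('PopScan', INTERNET_OPEN_TYPE_PRECONFIG,
    nil, nil, 0);
  hConnect := InternetConnect(hInet, 'www.example.com',
    INTERNET_DEFAULT_HTTP_PORT, nil, nil, INTERNET_SERVICE_HTTP, 0, 0);

  // INTERNET_FLAG_KEEP_CONNECTION keeps the connection alive across
  // the NTLM challenge/response round trips. Without it, the proxy
  // answers 407 even though the credentials are correct.
  hRequest := HttpOpenRequest(hConnect, 'GET', '/', nil, nil, nil,
    INTERNET_FLAG_KEEP_CONNECTION, 0);
  HttpSendRequest(hRequest, nil, 0, nil, 0);
end;
```

With INTERNET_OPEN_TYPE_PRECONFIG, the proxy address and credentials come from the system settings; the only thing the application has to get right is that one flag.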
I’ve already posted about this site with its speedruns of old console games. What I did not know back then is that these videos are created using slow motion and savestates, which is what makes them look so exceptionally good (if you prefer movies made without savestates, then this is for you).
Though the videos are made with savestates, they are extremely fun to watch, so bisqwit’s page is one of those I have been visiting every day just to look for updates. Recently, it all went quiet…
And today I see what was keeping bisqwit from posting new movies: the whole page got redesigned (on a wiki basis), and SNES and Genesis (Mega Drive here in Europe) movies were added. Very nice. My BitTorrent client is already hard at work ;-)
Yesterday, when I was reading through old entries here on gnegg.ch, it occurred to me that I never really styled the comments section of my postings during the redesign. I had taken the old MT template and style definitions and left it at that.
I wanted to change that and so I did:
#comment-form {
    display: block !important;
}
I like this solution quite a lot. The entries are much less cluttered that way. What do you think?
While looking for some documentation for improving my comments system (more on that in a later post), I came across a link to this blog entry, which announces a revised licensing scheme for Movable Type 3.0.
This time they actually did it right: The (still) free edition is now clearly announced. The personal edition is what quite a lot of users (including myself) have wanted (unlimited blogs) and it is quite affordable. This is nice.
Thank you, Movable Type
I have a server (running gnegg.ch) with 1.5 GB of RAM, and I’m running Gentoo Linux (another candidate for my all-time favourites list, but it’s still too soon for that; I’ve only been working with it for a little more than a year). And as I wanted the thing to be as secure as possible, I built a kernel from scratch without module support.
What I’ve always asked myself is why the heck “free” lists just 896 MB of available memory:
galadriel root # free -m
             total       used       free     shared    buffers     cached
Mem:           885        193        692          0          6         69
-/+ buffers/cache:        117        768
Swap:          976          0        976
At first I suspected a BIOS problem, but after seeing GRUB recognize the whole amount of memory, I came to the conclusion that there must be some problem in the kernel.
As 2.6 was still quite new, I waited for the next gentoo-dev-sources to be released, which happened somewhere around today. With the new kernel the problem still existed, so I dug deeper.
dmesg showed something like this in its first lines:
Warning only 896MB will be used.
Use a HIGHMEM enabled kernel.
Though I misread the second line as a status message (stating that HIGHMEM is in use) rather than a request, I entered the above message into Google Groups and found out that the second line is indeed the solution to the problem:
In Processor type and features, set High Memory Support to 4GB and recompile your kernel.
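In the resulting .config, that menuconfig choice corresponds to these options (as they appear on a 2.6 x86 kernel; only the 4GB variant is enabled):

```
# CONFIG_NOHIGHMEM is not set
CONFIG_HIGHMEM4G=y
# CONFIG_HIGHMEM64G is not set
CONFIG_HIGHMEM=y
```

After changing them, recompile, install the new kernel and reboot.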
What I didn’t understand at first: I was having this problem with 1.5 GB of RAM, while this option sounded like it was about 4 GB. The explanation: without HIGHMEM, a 32-bit kernel can directly map only about 896 MB of physical memory, and the 4GB option covers everything between 896 MB and 4 GB. So Google was helpful, as it is most of the time, enabling me to virtually double the available RAM:
galadriel root # free -m
             total       used       free     shared    buffers     cached
Mem:          1520        333       1186          0         12        158
-/+ buffers/cache:        162       1358
Swap:          976          0        976
Nice, isn’t it?
Update: For those that have not yet noticed it: The title of this entry does hint at products like this, though this one is at least honest in its description.
Who doesn’t have them? Those all-time favourite tools. It’s not just software, it’s passion. Tools that you simply have to use. Tools where all objectivity seems to fade away when it comes to making recommendations. Tools whose development you actively monitor (or even participate in). Tools to which you gladly donate some money, even though they are free. Tools you love.
Of course, I too know of some tools. And this is my list (in no particular order):
And you? Do you have such tools in your toolbox? Do you use the words love and software in the same phrase? I certainly do!