Moving my old IIS 5 applications from Windows XP Pro to Windows 2008 Server and IIS 7 proved to be a real challenge. Just when I thought everything was running smoothly, I ran into a file upload size limit that I thought I had already adjusted.
In the older IIS, I added the following code to <system.web> in the web.config file in the web application’s folder:
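Something along these lines, using the standard `httpRuntime` attributes — a sketch of the setting, with the values matching the limits described below (`maxRequestLength` is measured in kilobytes and `executionTimeout` in seconds):

```xml
<system.web>
  <!-- maxRequestLength is in kilobytes; executionTimeout is in seconds -->
  <httpRuntime maxRequestLength="2000000" executionTimeout="100000" />
</system.web>
```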
This allows file uploads of up to 2,000,000 kilobytes (about 2 GB), and the request will time out after 100,000 seconds, or 27.8 hours. I think that if someone spends more than 28 hours uploading a file, they should send me a DVD instead.
Adjusting File Size Limit in IIS 7
The problem is that in IIS 7 on Windows 2008 Server, the web application will reject any file that is larger than 30 MB. This is a default limitation of IIS. You can increase the maximum file size by adding the following code to <system.webServer> in the web.config file:
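A sketch of that setting, using the `requestFiltering` element. Note that `maxAllowedContentLength` is measured in bytes, and its default of 30,000,000 is where the 30 MB ceiling comes from; matching the 2,000,000 KB limit above works out to roughly 2,048,000,000 bytes:

```xml
<system.webServer>
  <security>
    <requestFiltering>
      <!-- maxAllowedContentLength is in bytes (default 30000000, about 30 MB) -->
      <requestLimits maxAllowedContentLength="2048000000" />
    </requestFiltering>
  </security>
</system.webServer>
```

Keep in mind that the `httpRuntime` limit in `<system.web>` still applies as well; both values have to be raised for large uploads to work.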
The article is interesting because the author writes about a programmer named Mel who created a highly optimized application in machine code. Machine code is the set of binary instructions that a computer executes directly; it's what a compiler produces from source code. It is the lowest-level language, written as raw binary or hexadecimal numbers.
Mel didn’t like assembly language (which is one step above machine code) because it couldn’t optimize code as well as he could.
People today avoid assembly language because it's claimed that compilers are powerful enough to optimize as well as, if not better than, a human can. C++ does a good job and C is even better, but if you truly want high-performance code, assembly language is a must. No compiler today can optimize as well as a skilled assembly language programmer.
The advantage of high-level languages is that you can develop code faster and with fewer bugs. These languages also make it much easier to create complex user interfaces for modern operating systems. It's estimated that only about 20% of your source code needs to be optimized. Rewriting that 20% as assembly language functions can give a significant boost to software performance.
The article is long, but it's worth reading.
A recent article devoted to the *macho* side of programming made the bald and unvarnished statement:
Real Programmers write in FORTRAN.
Maybe they do now, in this decadent era of Lite beer, hand calculators, and “user-friendly” software, but back in the Good Old Days, when the term “software” sounded funny and Real Computers were made out of drums and vacuum tubes, Real Programmers wrote in machine code. Not FORTRAN. Not RATFOR. Not, even, assembly language. Machine Code. Raw, unadorned, inscrutable hexadecimal numbers. Directly.
Lest a whole new generation of programmers grow up in ignorance of this glorious past, I feel duty-bound to describe, as best I can through the generation gap, how a Real Programmer wrote code. I’ll call him Mel, because that was his name.
I first met Mel when I went to work for Royal McBee Computer Corp., a now-defunct subsidiary of the typewriter company. The firm manufactured the LGP-30, a small, cheap (by the standards of the day) drum-memory computer, and had just started to manufacture the RPC-4000, a much-improved, bigger, better, faster — drum-memory computer. Cores cost too much, and weren’t here to stay, anyway. (That’s why you haven’t heard of the company, or the computer.)
I had been hired to write a FORTRAN compiler for this new marvel and Mel was my guide to its wonders. Mel didn’t approve of compilers.
“If a program can’t rewrite its own code”, he asked, “what good is it?”
Mel had written, in hexadecimal, the most popular computer program the company owned. It ran on the LGP-30 and played blackjack with potential customers at computer shows. Its effect was always dramatic. The LGP-30 booth was packed at every show, and the IBM salesmen stood around talking to each other. Whether or not this actually sold computers was a question we never discussed.
Mel’s job was to re-write the blackjack program for the RPC-4000. (Port? What does that mean?) The new computer had a one-plus-one addressing scheme, in which each machine instruction, in addition to the operation code and the address of the needed operand, had a second address that indicated where, on the revolving drum, the next instruction was located.
In modern parlance, every single instruction was followed by a GO TO! Put *that* in Pascal’s pipe and smoke it.
Mel loved the RPC-4000 because he could optimize his code: that is, locate instructions on the drum so that just as one finished its job, the next would be just arriving at the “read head” and available for immediate execution. There was a program to do that job, an “optimizing assembler”, but Mel refused to use it.
“You never know where it’s going to put things”, he explained, “so you’d have to use separate constants”.
It was a long time before I understood that remark. Since Mel knew the numerical value of every operation code, and assigned his own drum addresses, every instruction he wrote could also be considered a numerical constant. He could pick up an earlier “add” instruction, say, and multiply by it, if it had the right numeric value. His code was not easy for someone else to modify.
I compared Mel’s hand-optimized programs with the same code massaged by the optimizing assembler program, and Mel’s always ran faster. That was because the “top-down” method of program design hadn’t been invented yet, and Mel wouldn’t have used it anyway. He wrote the innermost parts of his program loops first, so they would get first choice of the optimum address locations on the drum. The optimizing assembler wasn’t smart enough to do it that way.
Mel never wrote time-delay loops, either, even when the balky Flexowriter required a delay between output characters to work right. He just located instructions on the drum so each successive one was just *past* the read head when it was needed; the drum had to execute another complete revolution to find the next instruction. He coined an unforgettable term for this procedure. Although “optimum” is an absolute term, like “unique”, it became common verbal practice to make it relative: “not quite optimum” or “less optimum” or “not very optimum”. Mel called the maximum time-delay locations the “most pessimum”.
After he finished the blackjack program and got it to run (“Even the initializer is optimized”, he said proudly), he got a Change Request from the sales department. The program used an elegant (optimized) random number generator to shuffle the “cards” and deal from the “deck”, and some of the salesmen felt it was too fair, since sometimes the customers lost. They wanted Mel to modify the program so, at the setting of a sense switch on the console, they could change the odds and let the customer win.
Mel balked. He felt this was patently dishonest, which it was, and that it impinged on his personal integrity as a programmer, which it did, so he refused to do it. The Head Salesman talked to Mel, as did the Big Boss and, at the boss’s urging, a few Fellow Programmers. Mel finally gave in and wrote the code, but he got the test backwards, and, when the sense switch was turned on, the program would cheat, winning every time. Mel was delighted with this, claiming his subconscious was uncontrollably ethical, and adamantly refused to fix it.
After Mel had left the company for greener pa$ture$, the Big Boss asked me to look at the code and see if I could find the test and reverse it. Somewhat reluctantly, I agreed to look. Tracking Mel’s code was a real adventure.
I have often felt that programming is an art form, whose real value can only be appreciated by another versed in the same arcane art; there are lovely gems and brilliant coups hidden from human view and admiration, sometimes forever, by the very nature of the process. You can learn a lot about an individual just by reading through his code, even in hexadecimal. Mel was, I think, an unsung genius.
Perhaps my greatest shock came when I found an innocent loop that had no test in it. No test. *None*. Common sense said it had to be a closed loop, where the program would circle, forever, endlessly. Program control passed right through it, however, and safely out the other side. It took me two weeks to figure it out.
The RPC-4000 computer had a really modern facility called an index register. It allowed the programmer to write a program loop that used an indexed instruction inside; each time through, the number in the index register was added to the address of that instruction, so it would refer to the next datum in a series. He had only to increment the index register each time through. Mel never used it.
Instead, he would pull the instruction into a machine register, add one to its address, and store it back. He would then execute the modified instruction right from the register. The loop was written so this additional execution time was taken into account — just as this instruction finished, the next one was right under the drum’s read head, ready to go. But the loop had no test in it.
The vital clue came when I noticed the index register bit, the bit that lay between the address and the operation code in the instruction word, was turned on — yet Mel never used the index register, leaving it zero all the time. When the light went on it nearly blinded me.
He had located the data he was working on near the top of memory — the largest locations the instructions could address — so, after the last datum was handled, incrementing the instruction address would make it overflow. The carry would add one to the operation code, changing it to the next one in the instruction set: a jump instruction. Sure enough, the next program instruction was in address location zero, and the program went happily on its way.
I haven’t kept in touch with Mel, so I don’t know if he ever gave in to the flood of change that has washed over programming techniques since those long-gone days. I like to think he didn’t. In any event, I was impressed enough that I quit looking for the offending test, telling the Big Boss I couldn’t find it. He didn’t seem surprised.
When I left the company, the blackjack program would still cheat if you turned on the right sense switch, and I think that’s how it should be. I didn’t feel comfortable hacking up the code of a Real Programmer.
Google Code Search is a very useful tool for searching the open source software available on the Internet. It's handy when you need to know how to use a certain function or want an example of how it can be used.
StuffIt is the most common file compression utility on Macs. Like similar applications, it lets you compress and uncompress files and folders. If you're transferring a large number of files by FTP or e-mail, the best thing to do is compress them into one file for an easy upload.
There is a free version of StuffIt, but recent versions contain only the expansion utility; DropStuff, the compression application, is no longer included. So, in order to create .sit files, you need to purchase the commercial version of the program, which isn't that expensive.
What do you do if you don't want to buy the application? OS X has built-in ZIP compression and decompression tools that very few Mac users seem to know about. I think it pays to read manuals, but most people don't read these days and just start using their high-tech toys straight out of the box.
Compressing a folder is very simple. All you have to do is browse to the folder, Control-click on it, and you'll see a context menu. Select Compress "folder name" and it will create a .zip file. The contents of that folder will then be in one easy-to-manage file.
Be careful using third-party ZIP utilities. I've noticed that some of them don't handle resource forks very well and save Adobe Type 1 fonts as zero-byte files. Also, be careful uncompressing Mac archives on a PC. Even if Type 1 fonts are correctly stored in the archive, they will lose their resource forks in Windows and be saved as zero-byte files.
The .zip utilities built into OS X are ideal for transferring data to another Mac, and they're free. Since OS X is built on BSD, you also have the option of using gzip and tar compression, but you may lose resource forks that way, so I would not recommend that approach. Stick with the built-in zip tools on the desktop.
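For completeness, here is what the BSD tar/gzip route looks like from Terminal; a minimal sketch with hypothetical folder and file names (the resource-fork caveat applies, so use it only for fork-free data):

```shell
# Make a sample folder to archive (hypothetical names).
mkdir -p fonts
echo "sample data" > fonts/readme.txt

# Pack the folder into a single gzip-compressed tar file.
tar -czf fonts.tar.gz fonts

# Unpack it into a fresh location to verify the round trip.
mkdir -p restored
tar -xzf fonts.tar.gz -C restored
cat restored/fonts/readme.txt
```

If you do want forks preserved from the command line, macOS's own `ditto` tool (e.g. `ditto -c -k --sequesterRsrc --keepParent folder folder.zip`) is, as far as I know, the fork-aware equivalent of the Finder's Compress command.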
Every PHP programmer wants their application to run on as many servers as possible, and in order to do this, you need to write for PHP 4. Unfortunately, PHP 4 has been discontinued, PHP 5 has been here for three years, and PHP 6 is right around the corner.
There weren’t very many web hosts supporting PHP 5 a few months ago, but things have been changing thanks to the efforts of many PHP 5 evangelists. The new PHP is a better, more secure language, and the many security problems of PHP 4 will be gone in PHP 6. Version 5 is meant to be a transition, a chance to convert code before the upcoming major release.
I wrote WordPressXmlRpc for PHP 5. When I released it, a few people wanted it to run on PHP 4, so I converted it. The application is relatively simple, so it wasn't much of an effort. I've also written a PHP application for backing up databases and folders on my web hosting account, which will be released in the coming weeks. It's written in PHP 5 and is heavily object-oriented. Converting it to PHP 4 would be quite a task, and not one I'm interested in doing. There are so many web hosts that support PHP 5 now, and it takes too many resources to maintain two versions of the same software.
All PHP applications should be written for version 5 and there is still plenty of time to convert older applications before version 6 is released.