Technogeeks
Programming is not a zero-sum game. Teaching something to a fellow programmer doesn't take it away from you. I'm happy to share what I can, because I'm in it for the love of programming.
Friday, October 16, 2015
10 Mac Excel Keyboard Shortcuts
Sunday, October 11, 2015
List of App Review Websites to generate reviews about your app.
For an app developer it is really important to understand the value of generating word-of-mouth about your app. Reviews from influencers are a major factor behind many successful apps.
How do you get visibility in front of such influencers? A great way to start is to get your app listed on the various app discovery platforms. This will give your app visibility and put it in front of important app evangelists who can act as influencers and make your app viral. If your idea is great, nothing can stop you once it gets enough visibility. Here is a consolidated list of app review platforms.
iPhone App Review Websites
- Macworld
- Appolicious
- 148 Apps
- Iphone Apps Review Online
- AppCraver
- Apps Patrol
- iPhone App Review
- FreshApps
- The Daily App Show
- iPhone Iusethis
- What’s On iPhone
- Apple Iphone School
- GIZMODO
- Ars technica
- Appletell
- AppDictions
- iPhoneAppReviews
- The iPhone App Review
- Itune App Reviews
- App Advice
- TopTenReviews
- iPhone Application List
- Panappticon
- iReview iPhone
- Crazy Mike’s Apps
- App Scout
- App boy
- iSmash Phone
- App Safari
- App Chatter
- MacTalk
- iSource
- App Shopper
- App Store Apps
- TiPb
- App Spy
- App Bite
- Best App Site
- App-reciation Reviews
- AppsFire
- Touch Reviews
- iPhone Quality index
- Best Applications
- Best Free Apps
- iPhone Life
- App Buddy
- iPhone Help
- Tapscape
Android App General Review Websites
- Android Tapp
- Android App Reviews Source
- Android App Storm
- Talk Android
- Android Central
- Androinica
- Best Android Apps Review
- Appolicious
- Android App Labs
- Appnoodle
- Android and Me
- 101 Best Android Apps
- Phandroid
- Android Police
- Android Guys
- AppBrain
- Cool Smart Phone
- Android App 101
- Android Spin
- Droid Dog
- Android App Log
- Droid Life
- Android Apps 360
- Android Apps Review
- Android Apps
- Get Android Stuff
- Free Apps
Youtube Android App Reviewers
This list should help you start individually reaching out to some of the top app reviewers for iPhone and Android. There are different grades of review websites here. If it is hard for you to reach the top ones initially, get listed on the smaller ones first; later you can use those reviews to get covered by the top ones. Preparing interesting content about your app is also important at this step.
I will keep this list updated so that it can act as a resource for any startup looking for such discovery platforms.
Wednesday, August 01, 2012
Comet - A Server Side Push Ajax - Explained
If you are assuming it is an Ajax call made at a regular interval of time, you are wrong. As you assume, a common method of doing such notifications is to poll a script on the server (using Ajax) on a given interval (perhaps every few seconds) to check if something has happened. However, this can be pretty network intensive, and it is not the right way to handle such real-time data requirements.
How Facebook handles it is pretty interesting. It uses a technique called Comet, which is very similar to Ajax in that it's asynchronous, but applications that implement the Comet style can communicate state changes with almost negligible latency.

Comet (see the Wikipedia article) is the lesser known of the real-time techniques and can be thought of as the reverse of Ajax (well, actually Comet often uses Ajax, which I'll explain later, but bear with me for now). An event happens that is known to the server, and the server notifies the browser, updating the web page that the user is viewing.
The important difference between Ajax and Comet is where the action originates. With Ajax, the action is taken by the user and with Comet, it's an action from the server. Currently Comet is a popular technique for browser-based chat applications since it allows the server to receive a message from one user and display that message to another user. Some web applications that use Comet are Google's GTalk, Meebo's chat, and Facebook chat.
From a technical perspective, Ajax and Comet differ in the expected length of the request. Ajax uses a quick request-response to update or get new information from a server, while Comet typically opens a longer connection to get information as it is available from the server.
So how does Comet actually work?
There are several ways to implement Comet but the most common is called "long-polling". Long-polling means that the browser opens an XMLHttpRequest to the server but instead of expecting a quick response (like Ajax does) the connection just waits. If the connection gets a response from the server, it returns that to the browser right away. Otherwise after a bit of time the connection dies and the browser sends another request and waits again.
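To make this concrete, here is a minimal long-polling endpoint sketched in PHP. This is only an illustrative sketch: checkForNewMessage() is a hypothetical function standing in for your own message store, and a production system would use something smarter than a sleep loop.
PHP:
<?php
// Minimal long-polling endpoint (sketch). The browser opens an
// XMLHttpRequest to this script, and the script holds the
// connection open until there is something to report or a
// timeout is reached.
$timeout = 30;                       // give up after 30 seconds
$start = time();

while (time() - $start < $timeout) {
    $message = checkForNewMessage(); // hypothetical: returns a string or null
    if ($message !== null) {
        header('Content-Type: application/json');
        echo json_encode(array('message' => $message));
        exit;                        // respond the moment data arrives
    }
    usleep(250000);                  // wait 250 ms before checking again
}

// Nothing happened within the timeout: return an empty response,
// and the client simply opens a new request and waits again.
header('Content-Type: application/json');
echo json_encode(array('message' => null));
?>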
How is long-polling different from just periodically checking with Ajax?
Let's take an example.
Consider you have a grid in your web application. This grid displays dynamic data (say, stock quotes or portfolio details). You would like to display this ever-changing data in the client's browser, but you do not want the page to refresh every second. Time-polling with Ajax (every second or two) is also unattractive, because:
- Unnecessary calls are made to the server even when there is no update.
- Server load grows in direct proportion to the number of clients, so the server can stall with just a few hundred clients.
What you want is to display this dynamic content but update the client only when necessary (i.e., when the grid data changes), avoiding needless communication between client and server, and to update clients asynchronously without blocking previous requests. That is exactly the principle a Comet-based system applies. The advantages of using Comet are:
- The server pushes data to the clients only when the server-side data changes, avoiding unnecessary round trips. The result is better performance for the network, the web server, and your application.
- Asynchronous handling of clients avoids blocking threads, thereby increasing the application's performance significantly.
If you really want to try out how it works, you can use the chat demo in the following article.
http://www.zeitoun.net/articles/comet_and_php/start
It's really simple and may well surprise you with its quick responsiveness.
I hope this post gives you a basic idea of the Comet technique.
Not every web application is going to need it or even profit from it, and that's why it is less popular. But whenever you come across a requirement that demands very high responsiveness for a collaborative, multi-user application, remember: there is a better way to implement it than using an Ajax script that polls every second!
Saturday, June 25, 2011
Useful IDEs to Test Your PHP Code Online
Sometimes you don't have an Apache server on your PC and you don't want to go through the hassle of uploading your PHP script to your web hosting space just to test whether your tweaks to the code work. Sometimes you just need to paste the code into an IDE (Integrated Development Environment) and see the result, including potential errors and suggestions to fix them. Also, the ability to work with languages other than PHP is a plus. This was very useful for me, as my system doesn't have Apache installed. I hope it helps you in an emergency too. :)
A full-featured programming IDE. Supports C#/.NET, PHP, JavaScript, HTML and CSS. You can work with projects and multiple files simultaneously. You can also upload your existing project and edit it online. You are also given a link to share your projects online.
2. Codepad
Plain and simple. Supports about 13 programming languages including C/C++, Perl, Ruby and more. After pasting a code the user is given an unique URL that can be shared with other people, so they can see the result too. You can make the code private if you are concerned about privacy. No registration required.
3. IDEone.com
The functionality is close to Codepad's. Supports more than 40 programming languages. There is an optional registration if you want to manage your submitted scripts.
Tip: If you want to test the working of regular expressions, you can use this cool online tool for that.
Friday, June 24, 2011
Gumblar Virus – How To Avoid Getting Hacked
Gumblar Virus
2010 is the Year of the Gumblar. You might not know the name, but I'm sure you've experienced it either directly (hopefully not) or indirectly. Have you ever been surfing and come across a page with a big red sign warning you against entering the site? If you have, there's a good chance that site was hit with Gumblar or one of its variants, like Nine Ball or Martuz, or a host of other weird and wonderful names. If you run a successful online business, can you imagine the damage such an attack could do? I actually got hit with it on our development server, which hosts around 100 projects. But when I thought of the damage it would have done if it had hit Host…it certainly got my attention.
So what is Gumblar and how does it work? These are things EVERY webmaster MUST know! The original Gumblar used a vulnerability in Adobe Acrobat and Flash Player; subsequent variants use other exploitable software, but all have the same end result. I won't go into the technicalities of how your computer gets infected, but you need to know what the virus does. Once infected, it listens in on any FTP connections and steals the connection information. Usually within minutes the virus uses your FTP account to modify files and insert some nasty code. This code is normally an iframe, JavaScript or some other code that triggers a malware download from another computer.
The virus will sometimes modify PHP code and insert phpshell scripts, which in turn attempt to install the malware that other infected sites connect to in order to trigger malware downloads to unsuspecting site visitors. This is a three-pronged nightmare that just grows exponentially: from local computer, to FTP account, to server infection, and the wheel keeps on turning. So what's the defence?
The virus is three-pronged, and therefore everyone needs to cover as many of these vulnerabilities as possible.
1) Your Computer – a decent “On-Access” anti-virus program is all you need. When I got infected I was running a cheap AV program that wasn’t On-Access. This simply means the AV program automatically scans anything that is downloaded to your computer or any file that you open on your computer. If your anti-virus just gives you a daily scan you are NOT protected. You could get infected, download some nasty stuff to your computer and proliferate the virus before you even get to your daily scan.
2) FTP over SSL. If you are on a Linux server, simply choose a connection option in your FTP program that is encrypted or just says "SSL". All of our shared servers should have this working. If you find it doesn't, please contact Support and we will fix it! With this option your connection info is sent encrypted, not in plain text, and the virus cannot sniff it out. We would love to implement this by default (forcing people to use it), but even though we could post about it in a newsletter, on a mailing list, on our blog and on our forum, we would still get hundreds of tickets asking why their FTP doesn't work. As awareness grows, maybe we will roll it out slowly.
If you have a dedicated server and would like FTP over SSL activated please contact Support.
Bad news for Windows clients on this front. Our Windows servers don’t currently support FTP over SSL as this is a feature included in the newer Windows 2008 OS with IIS7. It’s a huge change and one that we aren’t quite ready for. But you can still install a decent Anti-Virus program.
3) Server Infection – this is one area where Windows servers aren't as vulnerable. The virus uses PHP which needs to be running as a global user such as Apache. PHP on Windows has run under a user's FTP username as CGI for ages, so even if files get infected the virus cannot break out of the user's home directory. On Linux, though, PHP has run as Apache for aeons, and it's only with later versions of Plesk that we now have the option to run PHP as CGI or FastCGI. So if you're on Plesk 9, I encourage you to switch PHP to a FastCGI application under Web Host Settings for the domain. Some scripts can break with it, so if you are not sure, please don't hesitate to contact Support and we will advise you. Scripts tend to run faster under FastCGI too, so you are in fact doing yourself a service.
This year we’ve been dealing with Gumblar related issues almost on a weekly basis. It is very hard to convince someone that the server hasn’t been hacked when their website is showing the Reported Attack Site page. In these cases the issue almost always lies with the user’s computer being infected.
But we have also had cases where the virus has spread through Apache-owned PHP files, causing malicious downloads and random page redirects to search results containing a list of infected sites. We can always track down the source, but it is very frustrating for us as hosts and for our users. In this case a solution would be to force every domain using PHP to run as FastCGI, but as with the FTP solution there would be even more fallout. So it's a tightrope act with a bit of a dodgy safety net. All we can do as hosts is raise our own community's awareness of this problem, which doesn't seem to be going away any time soon, and hope that in the future we can implement stricter safeguards against this menace.
Answering the most common CSS question: "Why does my site look different in IE than in Firefox?"
I am going to explain the fundamental reason why your site may look slightly different in various browsers. Please check this out and let me know if it helps you.
Margins and Padding
One of the main causes for the many positional differences between layouts in various browsers is due to the default stylesheet each browser applies to give styling to certain elements. This usually involves setting default margins and padding to some elements to make them behave in a certain way.
For instance, paragraph (p) tags will have a margin applied to them so that each paragraph is separated by vertical white space and do not run into each other. The same applies to many other tags including heading tags (h1 etc). The problem occurs because the amount of margin (or padding) applied to these elements is not consistent across browsers. On many occasions Mozilla/Firefox will add a top margin to the element as well as a bottom margin. IE will however only add a bottom margin. If you were then to view these two browsers side by side you would see that the alignment would be different due to the top margin applied by Mozilla which could make your design not line up as expected.
In some designs this may not be a problem but in cases where position is important, such as aligning with other elements on the page, then the design may look bad or at least not as expected.
Here are some styles taken from the default Firefox 2.0 stylesheet (html.css); they immediately show what is going on:
CSS:
body {
  display: block;
  margin: 8px;
}

p, dl {
  display: block;
  margin: 1em 0;
}

h1 {
  display: block;
  font-size: 2em;
  font-weight: bold;
  margin: .67em 0;
}

h2 {
  display: block;
  font-size: 1.5em;
  font-weight: bold;
  margin: .83em 0;
}

h3 {
  display: block;
  font-size: 1.17em;
  font-weight: bold;
  margin: 1em 0;
}

h4 {
  display: block;
  font-weight: bold;
  margin: 1.33em 0;
}

h5 {
  display: block;
  font-size: 0.83em;
  font-weight: bold;
  margin: 1.67em 0;
}

h6 {
  display: block;
  font-size: 0.67em;
  font-weight: bold;
  margin: 2.33em 0;
}
As you can clearly see, various properties have been set, but the most important are the margins and padding, as they vary considerably. If you were to look at the default IE stylesheet, you would find few styles that are the same as the above.
What Can Be Done
Since we can never be sure whether the browser's stylesheet has applied margin or padding to an element the only real option is to explicitly set the margins and padding ourselves. This way we can over-ride the default stylesheet so that we know exactly how each element will behave in each browser.
As we don't really know which elements have default styling applied to them (across all browsers), we must set the margin and padding for every element we use. In most cases we are just talking about block-level elements; you do not need to do this for inline elements such as em, strong, a, etc., which seldom have any margin or padding applied to them (although em and strong do have some styling already applied to give them their emphasized and bold look).
Here is how you can reset the padding and margin of block elements when you use them:
CSS:
html, body {margin:0;padding:0}
p {margin:0 0 1em 0;padding:0}
h1 {margin:0 0 .7em 0;padding:0}
form {margin:0;padding:0}
Take the body element for example, and notice that we have included the html element also, and then we have re-set padding and margins to zero. As explained above, various browsers will apply different amounts of margin to the body to give the default gap around the page. It is important to note that Opera does not use margins for the default body spacing but uses padding instead. Therefore we must always reset padding and margins to be 100% sure we are starting on an even footing.
If you did not reset the margins or padding and you simply defined something like this:
CSS:
body {margin:1em}
Then in Opera you would now have the default padding on the body plus the extra margin you just defined, thereby doubling the initial space around the body in error.
Also be wary of doing things like this:
CSS:
html, body {margin:0;padding:1em}
You have now defined 1em padding on the html element and 1em padding on the body element giving you 2em padding overall which probably was not what you intended.
Global White Space Reset
These days it is common to use the global reset technique which uses the universal selector (*) to reset all the padding and margins to zero in one fell swoop and save a lot of messing around with individual styles.
e.g.
CSS:
* {margin:0;padding:0}
The universal selector (the asterisk *) matches any element at all and to turn all elements blue we could do something like this:
CSS:
* {color:blue}
(Of course they would only be blue as long as they have not been over-ridden by more specific styles later on in the stylesheet.)
The global reset is a neat little trick that saves you having to remember to reset every element you use and you can be sure that all browsers are now starting on even footing.
Lists need a special mention here, as it is not often understood that the default space for the bullet in lists is simply provided by the default stylesheet in the form of some left margin. Usually about 16px of left margin is added by default to the UL to give the bullet image somewhere to show; otherwise there is nowhere for it to go. As with the problems already mentioned, we also need to cater for some browsers that don't use left margin but use left padding instead.
This can be quite a big issue if, for instance, you have not reset the default padding and margin to zero and try something like this.
CSS:
ul {padding:1em}
In browsers that have a default margin applied you will now get the default left margin of 16px (approx) and a default padding of 1em, giving you approximately twice the amount of space on the left side of the list. This would, of course, make the design look quite different in the various browsers and not something you would wish to do.
In essence the margin should have been reset to zero, either initially with the global reset, or by simply doing the following:
CSS:
ul {margin:0;padding:1em}
Now all browsers will display the same, but you will need to ensure that the 1em is still enough room for the bullet to show. I usually allow 16px left margin (or padding) as a rough guide and that seems to work well. (You can use either padding or margin for the default bullet space.)
Drawbacks
However, as with all things that make life easier there is a price to be paid.
First of all, certain form elements are affected by this global reset and do not behave as per their normal defaults. The input button in Mozilla will lose its "depressed when clicked effect" and will not show any action when clicked other than submitting the form, of course. IE and Opera do not suffer from this problem and it is not really a major issue but any loss of visual clues can be a detriment to accessibility.
You may think that you can simply re-instate the margin and padding to regain the depressed effect in Mozilla, but alas this is not so. Once you have removed the padding, that changes the element's behavior, and it cannot be restored even by adding more padding.
There is also an issue with select/option drop down lists in Mozilla and Opera. You will find that using the global reset will take away the right padding/margin on the drop down list items and that they will be flush against the drop down arrow and look a little squashed. Again, we have problems in re-instating this padding/margin in a cross browser way.
You can't add padding to the select element because Mozilla will add the padding all around, including around the little drop-down arrow, which suddenly becomes detached from its position and gets a big white gap around it. You can, however, add right padding to the option element instead to give you some space; this looks fine in Mozilla but unfortunately doesn't work in Opera. Opera in fact needs the padding on the select element, which, as we already found out, messes up Mozilla.
Here is an image showing the problems in Firefox and Opera:
[Image: select element in Firefox and Opera]
There is no easy fix -- it's something you have to live with if you use the global reset method.
If you do not have any forms in your site (unlikely) then you don't have to worry about these issues or you can simply choose to ignore them if you think your forms are still accessible and don't look too bad. This will vary depending on the complexity of your form design and is something you will need to design for yourself. If you are careful with the amount of padding you add then you can get away with a passable design that doesn't look too bad cross-browser.
Another perceived drawback, of which there has been a lot of discussion, is whether the global reset method could have speed implications for the browser's rendering of the page. As the universal selector applies to every single element on the page, including elements that don't really need it, it has been suggested that this could slow the browser down in cases where the html is very long and there are many nodes for the parser to traverse.
While I agree with this logic and accept that this may be true I have yet to encounter an occasion where this has been an issue. Even if it were an issue I doubt very much that in the normal scheme of things it would even be noticeable but of course is still something to be aware of and to look out for.
The final drawback of using the global reset method is that it is like taking a hammer to your layout when a screwdriver would have been better. As I have noted above, there is no need to reset things like em, b, i, a, strong etc. anyway, and perhaps it's just as easy to set the margins and padding as you go.
As an example of what I mean take this code.
CSS:
* {margin:0;padding:0}
p,ol,ul,h1,h2,h3,h4,h5,h6 {margin:0 0 1em 0}
I have reset the padding and margin on all elements and then given a few defaults for the most popular elements that I am going to use. However, when coding the page, I get to the content section and decide I need some different margins, so I define the following:
CSS:
#content p {margin-top:.5em}
So now I have a situation where I have addressed that element three times already. If I hadn't used the global reset or the default common styling as shown above then I could simply have said:
#content p {margin:.5em 0 1em 0;padding:0}
This way I have addressed the element only once and avoided all issues related to the global reset method. It is likely that you will apply alternate styling to all the elements that you use on the page anyway and therefore you simply need to remember to define the padding and margin as you go.
CSS:
form {width:300px;margin:0;padding:0}
h1 {color:red;background:white;margin:1em;padding:2px}
Conclusion
The safest method is simply to define the margins and padding as you go, because nine times out of ten you will be changing something on these elements, and more than likely it will involve the padding and margins. This saves duplication and also solves all the issues that the global reset may have.
The global reset is useful for beginners who don't understand that they need to control everything or who simply forget that elements like forms have a great big margin in IE but none in other browsers.
In the end it's a matter of choice and of consistency. Whatever method you use, make sure you are consistent and logical and you won't go wrong. It is up to the designer to take control of the page and explicitly control every element that is used. Do not let the browser's defaults get in your way, and be aware that elements can have different amounts of padding and margin as determined by the browser's own default stylesheet. It is your job to control this explicitly.
Further Reading:
http://meyerweb.com/eric/thoughts/2004/09/15/emreallyem-undoing-htm...
http://meyerweb.com/eric/articles/webrev/200006a.html
http://tantek.com/log/2004/09.html#d06t2354
http://www.456bereastreet.com/archive/200410/global_white_space_reset/
10 quick tips to help make your CSS coding as pain-free as possible.
1. Keep it Simple
This may sound obvious but if you find yourself using complicated coding to achieve your design then you should think again about whether the feature you need is really necessary or if you're just thinking about your design and not your visitors. Too often designers get caught up in their own design and go to great lengths to produce a certain visual effect only to find later on that visitors find it either irritating or unusable.
Complex code is usually the result of muddled thinking. Plan your layout logically and work from the outside in and from the top down where possible. Look at what containers you will need and break jobs down into smaller parcels. I usually start with a page wrapper and then progress logically through the header, navigation, main content and footers etc trying to preserve the flow of the document as much as possible.
While good visual design is necessary to attract visitors you must still have good content and a usable and accessible site. If you find your html and css looks like spaghetti then have a re-think and see if you can simplify it. This will make it easier to maintain in the future and will often save code and bandwidth.
2. Don't use hacks unless it's a known and documented bug
This is an important point as I too often see hacks employed to fix things that aren't really broken in the first place. If you find that you are looking for a hack to fix a certain issue in your design then first do some research (Google is your friend here) and try to identify the issue you are having problems with.
If you find it's a known bug, then 99% of the time there will be a known solution to it, and you can safely use a hack if required, knowing that you are fixing a bug and not just correcting bad coding.
I couldn't count the number of times I've seen layouts using hacks when all that was needed was to control the default margins on the page (see next tip).
3. Take care of margins and padding on all elements that you use
All browsers apply default padding and margins to most elements and the amount they apply varies quite substantially. Therefore you need to explicitly control the padding and margins on all the elements you use.
This is covered in depth at my previous blog post http://deepusnath.ning.com/profiles/blogs/answering-the-most-common...
4. Avoid using too much absolute positioning
Most novices to CSS quickly latch on to absolute positioning because it is pretty straight-forward and does what it says on the box. However absolute layouts have a number of problems and the biggest problem of all is that absolute elements are removed from the flow.
This means that when you absolutely place an element, it has total disregard for whatever else is on your page. It will overlap whatever was in that position and take no notice of other content at all. The result of too much absolute positioning is that you end up having to control everything with absolute positioning, which makes for a very rigid and inflexible layout.
The most common problem encountered when using absolute positioning for two or three columns is "How to put a footer at the bottom of all three columns?" The answer is you can't, unless you resort to scripting or use a fixed height for all three columns.
Instead you should look into using mostly static positioning, margins and floats to maintain the flow of the layout. Static positioning is the default and basically means no positioning at all and the elements just take up space in the normal flow of the document. If elements flow normally then they have a logical construction and one element follows another without having to position it at all. You can use margins to nudge elements into position or use floats when you want elements aligned horizontally.
5. Avoid "divitus"
Although "divitus" isn't a real word it is now commonly used to refer to layouts that have too many divs and not enough semantic html. Semantic html means using the correct html element for the task in hand and not just using divs for everything. Divs are generic dividers of page content and nothing else. 99% of the time there will be an html tag perfect for the job in hand.
e.g. p,h1,h2,h3,h4,h5,h6,ul,ol,dl etc...
Use divs to divide the page into logical sections or when there is no better alternative. If your page is logically divided into sections that use ids to identify each section, this will allow you to target inner elements in that section without having to over-use classes on each element,
e.g. #top-section h1 {color:red} (see next tip on "classitus").
A common misuse of divs can be found in the following example:
HTML:
<div class="heading">Heading</div>
<div class="subheading">Sub Heading</div>
<div class="text">This is the content</div>
A lot of times the above code can simply be reduced to this:
HTML:
<h1>Heading</h1>
<h2>Sub Heading</h2>
<p>This is the content</p>
As you can see, by using the correct html to describe the content you give your layout inherent structure and meaning without any extra effort.
6. Avoid "Classitus"
"Classitus" is another made up word similar to "divitus" (as explained above) and refers to the over-use of classes (or id's) when in fact none are necessary. If your page is logically divided then you can target many specific elements without the need for millions of classes.
A common example of misuse of classes is shown below:
CSS:
a.link {color:red;text-decoration:none}
HTML:
<ul>
<li><a class="link" href="page1.html">Link 1</a></li>
<li><a class="link" href="page2.html">Link 2</a></li>
<li><a class="link" href="page3.html">Link 3</a></li>
</ul>
All the links have been given a class of .link in order to style them, which is completely unnecessary. If we apply an id or class to the UL instead, we can target all the anchors within that UL without having to add any extra classes at all.
CSS:
#nav a {color:red;text-decoration:none}
HTML:
<ul id="nav">
<li><a href="page1.html">Link 1</a></li>
<li><a href="page2.html">Link 2</a></li>
<li><a href="page3.html">Link 3</a></li>
</ul>
As you can see, we get the same effect and save considerably on mark-up while improving readability. A lot of times the UL may be unique within a section anyway, and you can use the parent id without even having to add an id to the UL. (Remember that ids are unique and can only be used once per page.)
7. Validate your code
Visit the validator at every opportunity and validate your css and html especially when learning something new. If you are new to html/css then validate regularly during development so that you can be sure the code you are using is correct; that will allow you to concentrate on getting the design right.
Do not wait until you have finished coding the design as you may be using features that are not appropriate and will result in a lot more work than necessary. Validating frequently will also catch simple errors like typos which will always creep into the code when you are not looking.
8. Rationalize your code
At every stage during development ask yourself whether you need that extra div wrapper or not. Can existing elements be utilised for background images without adding extraneous code?
Thinking ahead and planning your layout beforehand will often lead to more concise code and an easier-to-manage layout.
9. Flexibility
Remember that a web page isn't the same as a printed page and that ultimately the user has more control over how your page will appear than you do. With this in mind try to allow for some flexibility in your design so that things like text resizing issues don't break your layout. Don't make everything a fixed height/width or at least use ems to allow the layout to expand when text is resized.
With a little thought and patience you can still make your page look good and satisfy accessibility requirements.
10. Browser support
A designer's lot is often not a happy one due to the variance in the display offered via various browsers. There is no easy answer to this question (apart from the tips already given) and my method of working is as follows.
First of all decide with your client (or yourself) what browsers you are aiming to support. This will of course be based on many factors (which we won't go into here) but could be as simple as checking your server stats to see who your visitors are.
Once you have decided what browsers to support then make sure that you have access to these browsers in some way. The easiest way is to download the browser you need so you can test locally.
If you can't download the browsers for one reason or another or you need to test on another platform, then there are a number of sites that will offer remote access or screenshots. Some of these require payment and some of the simpler ones are free (a quick look on Google will soon sort you out).
Once you have decided which browsers to support, it is time to start coding, and you must check your design at every stage in the browsers that you want to support. This means writing a line of code and then firing up at least 3 or 4 browsers to check it. As you get more experienced you will soon learn what is likely to work and what isn't, and you can check less frequently.
If you take this approach of checking at every stage, you will soon find out what works and what doesn't, identify problems straight away, and determine the cause immediately. This would not be the case if you waited until you had finished and then checked the design: it could take hours (or days) to identify where the problem is and what is causing it. It may in fact be too late to fix it, because you have built the whole page on a feature that only works in one browser, and you would have to start again from scratch.
By checking as you go you eliminate this problem and quite often a small change in design at each stage will accommodate nearly all the browsers you need to support without needing to hack. You can't make these small tweaks and changes in design if you wait until the end.
The above tips for css coding aren't in any special order and most are just plain common sense. If you follow the advice given you will make your web design life a lot easier and less stressful.
Common PHP Security Vulnerabilities
Search Google for "php security" and you will come across a great article of security tips. I would like to share the valuable tips inspired by that article. This post discusses the most common security vulnerabilities, along with some standard best practices for PHP coding.
PHP is one of the most popular web programming languages in use today, due in large part to its highly flexible syntax that can perform many functions while working flawlessly in conjunction with HTML. It is relatively easy to learn for beginners and is also powerful enough for advanced users. It works exceptionally well with open source tools, such as the Apache web server and MySQL database. In other words, its versatility is unsurpassed when compared to other scripting languages, making it the language of choice for many programmers.
There are various types of attacks that PHP is particularly vulnerable to. The two main types of attacks are human attacks and automated attacks, both of which can potentially devastate a website. The goal of PHP security is to minimize, and ultimately eliminate, the potential for both human and automated attacks by putting into place strategic lines of defense to eliminate access to your site by unverified users. The way you go about doing this is to target the most common types of PHP security breaches first, so that you can guard your website against malicious attacks. So what are the most common types of PHP security breaches?
Most Common PHP Security Vulnerabilities
1. Register_Globals
Register_Globals makes writing PHP applications simple and convenient for the developer, but it also poses a potential security risk. This setting is located in PHP’s configuration file, which is php.ini, and it can be either turned on or off. When turned on, it allows unverified users to inject variables into an application to gain administrative access to your website. Most, if not all, PHP security experts recommend turning register_globals off.
So instead of relying on register_globals, you should instead go through PHP Predefined Variables, such as $_REQUEST. To further tighten security, you should also specify by using: $_ENV, $_GET, $_POST, $_COOKIE, or $_SERVER instead of using the more general $_REQUEST.
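As a quick illustration (a sketch only; check_login() and the $authorized flag are hypothetical names), here is why register_globals is dangerous and how the predefined variables avoid the problem:
PHP:
<?php
// BAD: with register_globals on, a request to script.php?authorized=1
// sets $authorized to a truthy value before this check ever runs.
if (check_login($user, $pass)) { // hypothetical login check
    $authorized = true;
}

// BETTER: initialize the flag yourself and read input explicitly
// from the predefined variables.
$authorized = false;
$user = isset($_POST['user']) ? $_POST['user'] : '';
$pass = isset($_POST['pass']) ? $_POST['pass'] : '';
if (check_login($user, $pass)) {
    $authorized = true;
}
?>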
2. Error Reporting
Error reporting is a great tool for diagnosing bugs. It allows you to fix bugs quicker and easier, but it also poses a potential security threat. The problem occurs when the error is visible to others on-screen, because it reveals possible security holes in your source code that a hacker can easily take advantage of. If display_errors is not turned off, or does not have a value of "0", the output will appear in the end user's browser, which is not good for security! If you want to set log_errors to on, then indicate the exact location of the log with error_log.
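As a sketch, the usual production settings look like this, whether set in php.ini or at the top of a bootstrap script (the log path is just an example):
PHP:
<?php
// Report everything, but log it instead of showing it on-screen.
error_reporting(E_ALL);
ini_set('display_errors', '0'); // never show errors to the end user
ini_set('log_errors', '1');
ini_set('error_log', '/var/log/php_errors.log'); // example path
?>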
3. Cross-site Scripting (XSS)
Cross-site scripting, or XSS, is a way for hackers to gather your website’s user data by using malicious markup or JavaScript code to trick a user, or their browser, to follow a bad link or present their login details to a fake login screen, which, instead of logging them in, steals their personal information. The best way to defend against XSS is to disable JavaScript and images while surfing the web, but we all know that’s nearly impossible with so many websites using JavaScript’s rich application environment these days.
A useful PHP function for protecting against XSS is htmlentities(). This simple function works by converting special characters in HTML to their corresponding entities; for example, "<" is converted to "&lt;".
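Here is a minimal example of escaping user input before echoing it back (the comment parameter is illustrative):
PHP:
<?php
// Convert special characters to HTML entities so injected markup
// or script is displayed as text instead of being executed.
$comment = isset($_GET['comment']) ? $_GET['comment'] : '';
echo htmlentities($comment, ENT_QUOTES, 'UTF-8');
// e.g. <script>alert(1)</script> becomes &lt;script&gt;alert(1)&lt;/script&gt;
?>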
4. Remote File Inclusion (RFI)
This type of attack is relatively unknown amongst developers, which makes it an especially damaging threat to PHP security. Remote file inclusion, or RFI, involves an attack from a remote location that exploits a vulnerable PHP application and injects malicious code for the purpose of spamming or even gaining access to the root folder of the server. An unverified user gaining access to any server can wreak major havoc on a website in many different ways, including abusing personal information stored in databases.
The best way to secure your site from RFI attacks is through php.ini directives, specifically allow_url_fopen and allow_url_include. The allow_url_fopen directive is set to on by default, while allow_url_include is set to off; turning both off will adequately protect your site from RFI attacks.
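In php.ini that means:
allow_url_fopen = Off
allow_url_include = Off
Note that turning allow_url_fopen off also disables legitimate remote reads via fopen() and file_get_contents(), so test your application after changing it.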
Other PHP Security Tools
- PhpSecInfo
This useful tool reports security information in the PHP environment, and best of all, it offers suggestions for improving the errors. It is available for download under the "New BSD" license, and the PhpSecInfo project is always looking for more PHP developers to help improve this tool.
- PHP Security Scanner
This is a tool used to scan PHP code for vulnerabilities, and it can be used to scan any directory. PHP Security Scanner features a useful UI for better visualization of potential problems, and it supports basic wild card search functionality for filtering the directories or files that are to be searched.
- Spike PHP Security Audit Tool
The Spike PHP Security Audit Tool is an open source solution for doing static analysis of PHP code. It will search for security exploits, so you can correct them during the development process.
Here is a basic coding standard for setting up database configuration. It is a very simple example, without any framework, intended to show how we can turn normal code into standard code.
$mysql = mysql_connect('localhost', 'test', 'test');
mysql_select_db('sample') or die("cannot select DB");
Trying a DRY approach
$db_host = 'localhost';
$db_user = 'test';
$db_password = 'test';
$db_database = 'bwired';
$mysql = mysql_connect($db_host, $db_user, $db_password);
mysql_select_db($db_database);
As the values normally don’t change, we can use constants
define('DB_HOST', 'localhost');
define('DB_USER', 'test');
define('DB_PASSWORD', 'test');
define('DB_DATABASE', 'sample');
$mysql = mysql_connect(DB_HOST, DB_USER, DB_PASSWORD);
mysql_select_db(DB_DATABASE);
After years of changing the values every time you upload something to the live server:
define('LIVE_ENV', true);
if (LIVE_ENV) {
    define('DB_HOST', 'localhost');
    define('DB_USER', 'test');
    define('DB_PASSWORD', 'test');
    define('DB_DATABASE', 'bwired');
} else {
    define('DB_HOST', 'testserver.com');
    define('DB_USER', 'testuser');
    define('DB_PASSWORD', 'test');
    define('DB_DATABASE', 'sample');
}
$mysql = mysql_connect(DB_HOST, DB_USER, DB_PASSWORD);
mysql_select_db(DB_DATABASE);
Even better would be this
if ($_SERVER["HTTP_HOST"] == 'www.domain.com') // remote live environment
{ … }
else // localhost test environment
{ … }
PHP5 procedural approach using the new mysqli extension
$link = mysqli_connect(DB_HOST, DB_USER, DB_PASSWORD, DB_DATABASE);
if (!$link) {
    printf("Connect failed: %s\n", mysqli_connect_error());
    exit();
}
printf("Host information: %s\n", mysqli_get_host_info($link));
mysqli_close($link);
How to Send Email from a PHP Script Using SMTP Authentication
PHP mail() and SMTP Authentication
Part of what makes the PHP mail() function so simple is its lack of flexibility. Most importantly and frustratingly, the stock mail() does not usually allow you to use the SMTP server of your choice, and it does not support SMTP authentication, required by many a mail server today, at all.
Fortunately, overcoming PHP's built-in shortcomings need not be difficult, complicated or painful either. For most email uses, the free PEAR Mail package offers all the power and flexibility needed, and it authenticates with your desired outgoing mail server, too. For enhanced security, secure SSL connections are supported.
Send Email from a PHP Script Using SMTP Authentication
To connect to an outgoing SMTP server from a PHP script using SMTP authentication and send an email:
- Make sure the PEAR Mail package is installed. Typically, in particular with PHP 4 or later, this will already have been done for you. Just give it a try.
- Adapt the example below for your needs. Make sure you change the following variables at least:
- from: the email address from which you want the message to be sent.
- to: the recipient's email address and name.
- host: your outgoing SMTP server name.
- username: the SMTP user name (typically the same as the user name used to retrieve mail).
- password: the password for SMTP authentication.
Sending Mail from PHP Using SMTP Authentication - Example
";
$to = "Ramona Recipient";
$subject = "Hi!";
$body = "Hi,\n\nHow are you?";
$host = "mail.example.com";
$username = "smtp_username";
$password = "smtp_password";
$headers = array ('From' => $from,
'To' => $to,
'Subject' => $subject);
$smtp = Mail::factory('smtp',
array ('host' => $host,
'auth' => true,
'username' => $username,
'password' => $password));
$mail = $smtp->send($to, $headers, $body);
if (PEAR::isError($mail)) {
echo("" . $mail->getMessage() . "
");
} else {
echo("Message successfully sent!
");
}
?>
Sending Mail from PHP Using SMTP Authentication and SSL Encryption - Example
";
$to = "Ramona Recipient";
$subject = "Hi!";
$body = "Hi,\n\nHow are you?";
$host = "ssl://mail.example.com";
$port = "465";
$username = "smtp_username";
$password = "smtp_password";
$headers = array ('From' => $from,
'To' => $to,
'Subject' => $subject);
$smtp = Mail::factory('smtp',
array ('host' => $host,
'port' => $port,
'auth' => true,
'username' => $username,
'password' => $password));
$mail = $smtp->send($to, $headers, $body);
if (PEAR::isError($mail)) {
echo("" . $mail->getMessage() . "
");
} else {
echo("Message successfully sent!
");
}
?>
Best Practices for Speeding Up Your Web Site
80% of the end-user response time is spent on the front-end. Most of this time is tied up in downloading all the components in the page: images, stylesheets, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to faster pages.
One way to reduce the number of components in the page is to simplify the page's design. But is there a way to build pages with richer content while also achieving fast response times? Here are some techniques for reducing the number of HTTP requests, while still supporting rich page designs.
Combined files are a way to reduce the number of HTTP requests by combining all scripts into a single script, and similarly combining all CSS into a single stylesheet. Combining files is more challenging when the scripts and stylesheets vary from page to page, but making this part of your release process improves response times.
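As a sketch of that release step (the file names and output path here are hypothetical), a few lines of PHP can concatenate your scripts into one file at build time:
PHP:
<?php
// Hypothetical build step: merge several scripts into one file so
// the page needs only a single HTTP request for its JavaScript.
$scripts = array('menu.js', 'forms.js', 'tracking.js'); // example inputs
$combined = '';
foreach ($scripts as $file) {
    $combined .= file_get_contents($file) . ";\n"; // ';' guards against a missing trailing semicolon
}
file_put_contents('all.js', $combined);
?>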
CSS Sprites are the preferred method for reducing the number of image requests. Combine your background images into a single image and use the CSS background-image and background-position properties to display the desired image segment.
Image maps combine multiple images into a single image. The overall size is about the same, but reducing the number of HTTP requests speeds up the page. Image maps only work if the images are contiguous in the page, such as a navigation bar. Defining the coordinates of image maps can be tedious and error prone. Using image maps for navigation is not accessible either, so it's not recommended.
Inline images use the data: URL scheme to embed the image data in the actual page. This can increase the size of your HTML document. Combining inline images into your (cached) stylesheets is a way to reduce HTTP requests and avoid increasing the size of your pages. Inline images are not yet supported across all major browsers.
Reducing the number of HTTP requests in your page is the place to start. This is the most important guideline for improving performance for first-time visitors. As described in Tenni Theurer's blog post Browser Cache Usage - Exposed!, 40-60% of daily visitors to your site come in with an empty cache. Making your page fast for these first-time visitors is key to a better user experience.
Use a Content Delivery Network
The user's proximity to your web server has an impact on response times. Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user's perspective. But where should you start?
As a first step to implementing geographically dispersed content, don't attempt to redesign your web application to work in a distributed architecture. Depending on the application, changing the architecture could include daunting tasks such as synchronizing session state and replicating database transactions across server locations. Attempts to reduce the distance between users and your content could be delayed by, or never pass, this application architecture step.
Remember that 80-90% of the end-user response time is spent downloading all the components in the page: images, stylesheets, scripts, Flash, etc. This is the Performance Golden Rule. Rather than starting with the difficult task of redesigning your application architecture, it's better to first disperse your static content. This not only achieves a bigger reduction in response times, but it's easier thanks to content delivery networks.
A content delivery network (CDN) is a collection of web servers distributed across multiple locations to deliver content more efficiently to users. The server selected for delivering content to a specific user is typically based on a measure of network proximity. For example, the server with the fewest network hops or the server with the quickest response time is chosen.
Some large Internet companies own their own CDN, but it's cost-effective to use a CDN service provider, such as Akamai Technologies, EdgeCast, or level3. For start-up companies and private web sites, the cost of a CDN service can be prohibitive, but as your target audience grows larger and becomes more global, a CDN is necessary to achieve fast response times. At Yahoo!, properties that moved static content off their application web servers to a CDN (both 3rd party as mentioned above as well as Yahoo’s own CDN) improved end-user response times by 20% or more. Switching to a CDN is a relatively easy code change that will dramatically improve the speed of your web site.
Add an Expires or a Cache-Control Header
There are two aspects to this rule:
- For static components: implement a "Never expire" policy by setting a far future Expires header
- For dynamic components: use an appropriate Cache-Control header to help the browser with conditional requests
Web page designs are getting richer and richer, which means more scripts, stylesheets, images, and Flash in the page. A first-time visitor to your page may have to make several HTTP requests, but by using the Expires header you make those components cacheable. This avoids unnecessary HTTP requests on subsequent page views. Expires headers are most often used with images, but they should be used on all components including scripts, stylesheets, and Flash components.
Browsers (and proxies) use a cache to reduce the number and size of HTTP requests, making web pages load faster. A web server uses the Expires header in the HTTP response to tell the client how long a component can be cached. This is a far future Expires header, telling the browser that this response won't be stale until April 15, 2010.
Expires: Thu, 15 Apr 2010 20:00:00 GMT
If your server is Apache, use the ExpiresDefault directive to set an expiration date relative to the current date. This example of the ExpiresDefault directive sets the Expires date 10 years out from the time of the request.
ExpiresDefault "access plus 10 years"
Keep in mind, if you use a far future Expires header you have to change the component's filename whenever the component changes. At Yahoo! we often make this step part of the build process: a version number is embedded in the component's filename, for example, yahoo_2.0.6.js.
Using a far future Expires header affects page views only after a user has already visited your site. It has no effect on the number of HTTP requests when a user visits your site for the first time and the browser's cache is empty. Therefore the impact of this performance improvement depends on how often users hit your pages with a primed cache. (A "primed cache" already contains all of the components in the page.) The number of page views with a primed cache is 75-85%. By using a far future Expires header, you increase the number of components that are cached by the browser and re-used on subsequent page views without sending a single byte over the user's Internet connection.
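If some of your components are served through PHP, you can emit the same far future headers yourself. A minimal sketch (the one-year lifetime is an arbitrary choice):
PHP:
<?php
// Far future caching headers for a component served via PHP.
$lifetime = 31536000; // one year, in seconds
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $lifetime) . ' GMT');
header('Cache-Control: public, max-age=' . $lifetime);
?>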
Gzip Components
The time it takes to transfer an HTTP request and response across the network can be significantly reduced by decisions made by front-end engineers. It's true that the end-user's bandwidth speed, Internet service provider, proximity to peering exchange points, etc. are beyond the control of the development team. But there are other variables that affect response times. Compression reduces response times by reducing the size of the HTTP response.
Starting with HTTP/1.1, web clients indicate support for compression with the Accept-Encoding header in the HTTP request.
Accept-Encoding: gzip, deflate
If the web server sees this header in the request, it may compress the response using one of the methods listed by the client. The web server notifies the web client of this via the Content-Encoding header in the response.
Content-Encoding: gzip
Gzip is the most popular and effective compression method at this time. It was developed by the GNU project and standardized by RFC 1952. The only other compression format you're likely to see is deflate, but it's less effective and less popular.
Gzipping generally reduces the response size by about 70%. Approximately 90% of today's Internet traffic travels through browsers that claim to support gzip. If you use Apache, the module configuring gzip depends on your version: Apache 1.3 uses mod_gzip while Apache 2.x uses mod_deflate.
There are known issues with browsers and proxies that may cause a mismatch in what the browser expects and what it receives with regard to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers drops off. The Apache modules help out by adding appropriate Vary response headers automatically.
Servers choose what to gzip based on file type, but are typically too limited in what they decide to compress. Most web sites gzip their HTML documents. It's also worthwhile to gzip your scripts and stylesheets, but many web sites miss this opportunity. In fact, it's worthwhile to compress any text response including XML and JSON. Image and PDF files should not be gzipped because they are already compressed. Trying to gzip them not only wastes CPU but can potentially increase file sizes.
Gzipping as many file types as possible is an easy way to reduce page weight and accelerate the user experience.
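If you cannot change the server configuration, PHP can also compress its own output using the built-in ob_gzhandler output callback, which checks the Accept-Encoding request header and sends the matching Content-Encoding header automatically. A minimal sketch:
PHP:
<?php
// Compress this script's output for clients that accept gzip;
// for clients that don't, the output is sent uncompressed.
ob_start('ob_gzhandler');
?>
<html>
<body>
<p>This page is gzipped for browsers that support it.</p>
</body>
</html>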
Put Stylesheets at the Top
While researching performance at Yahoo!, we discovered that moving stylesheets to the document HEAD makes pages appear to be loading faster. This is because putting stylesheets in the HEAD allows the page to render progressively.
Front-end engineers that care about performance want a page to load progressively; that is, we want the browser to display whatever content it has as soon as possible. This is especially important for pages with a lot of content and for users on slower Internet connections. The importance of giving users visual feedback, such as progress indicators, has been well researched and documented. In our case the HTML page is the progress indicator! When the browser loads the page progressively the header, the navigation bar, the logo at the top, etc. all serve as visual feedback for the user who is waiting for the page. This improves the overall user experience.
The problem with putting stylesheets near the bottom of the document is that it prohibits progressive rendering in many browsers, including Internet Explorer. These browsers block rendering to avoid having to redraw elements of the page if their styles change. The user is stuck viewing a blank white page.
The HTML specification clearly states that stylesheets are to be included in the HEAD of the page: "Unlike A, [LINK] may only appear in the HEAD section of a document, although it may appear any number of times." Neither of the alternatives, the blank white screen or flash of unstyled content, are worth the risk. The optimal solution is to follow the HTML specification and load your stylesheets in the document HEAD.
Put Scripts at the Bottom
The problem caused by scripts is that they block parallel downloads. The HTTP/1.1 specification suggests that browsers download no more than two components in parallel per hostname. If you serve your images from multiple hostnames, you can get more than two downloads to occur in parallel. While a script is downloading, however, the browser won't start any other downloads, even on different hostnames.
In some situations it's not easy to move scripts to the bottom. If, for example, the script uses document.write to insert part of the page's content, it can't be moved lower in the page. There might also be scoping issues. In many cases, there are ways to work around these situations.
An alternative suggestion that often comes up is to use deferred scripts. The DEFER attribute indicates that the script does not contain document.write, and is a clue to browsers that they can continue rendering. Unfortunately, Firefox doesn't support the DEFER attribute. In Internet Explorer, the script may be deferred, but not as much as desired. If a script can be deferred, it can also be moved to the bottom of the page. That will make your web pages load faster.
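A minimal sketch of the recommended placement (the filename is assumed):

<body>
  <!-- page content renders and other components download first -->
  ...
  <!-- scripts go last so they don't block parallel downloads -->
  <script type="text/javascript" src="menu.js"></script>
</body>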
Avoid CSS Expressions
CSS expressions are a powerful (and dangerous) way to set CSS properties dynamically. They were supported in Internet Explorer starting with version 5, but were deprecated starting with IE8. As an example, the background color could be set to alternate every hour using CSS expressions:
background-color: expression( (new Date()).getHours()%2 ? "#B8D4FF" : "#F08A00" );
As shown here, the expression method accepts a JavaScript expression. The CSS property is set to the result of evaluating the JavaScript expression. The expression method is ignored by other browsers, so it is useful for setting properties in Internet Explorer that are needed to create a consistent experience across browsers.
The problem with expressions is that they are evaluated more frequently than most people expect. Not only are they evaluated when the page is rendered and resized, but also when the page is scrolled and even when the user moves the mouse over the page. Adding a counter to the CSS expression allows us to keep track of when and how often a CSS expression is evaluated. Moving the mouse around the page can easily generate more than 10,000 evaluations.
One way to reduce the number of times your CSS expression is evaluated is to use one-time expressions, where the first time the expression is evaluated it sets the style property to an explicit value, which replaces the CSS expression. If the style property must be set dynamically throughout the life of the page, using event handlers instead of CSS expressions is an alternative approach. If you must use CSS expressions, remember that they may be evaluated thousands of times and could affect the performance of your page.
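Here is a sketch of the one-time expression technique (the function name is illustrative). The first time IE evaluates the expression, the function assigns a concrete inline style, which replaces the expression so it never runs again:

<script type="text/javascript">
function altBgcolor(elem) {
    // assigning a concrete color overrides the expression for this element
    elem.style.backgroundColor = (new Date()).getHours() % 2 ? "#F08A00" : "#B8D4FF";
}
</script>
<style>
p { background-color: expression( altBgcolor(this) ); }
</style>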
Make JavaScript and CSS External
tag: javascript, css
Many of these performance rules deal with how external components are managed. However, before these considerations arise you should ask a more basic question: Should JavaScript and CSS be contained in external files, or inlined in the page itself?
Using external files in the real world generally produces faster pages because the JavaScript and CSS files are cached by the browser. JavaScript and CSS that are inlined in HTML documents get downloaded every time the HTML document is requested. This reduces the number of HTTP requests that are needed, but increases the size of the HTML document. On the other hand, if the JavaScript and CSS are in external files cached by the browser, the size of the HTML document is reduced without increasing the number of HTTP requests.
The key factor, then, is the frequency with which external JavaScript and CSS components are cached relative to the number of HTML documents requested. This factor, although difficult to quantify, can be gauged using various metrics. If users on your site have multiple page views per session and many of your pages re-use the same scripts and stylesheets, there is a greater potential benefit from cached external files.
Many web sites fall in the middle of these metrics. For these sites, the best solution generally is to deploy the JavaScript and CSS as external files. The only exception where inlining is preferable is with home pages, such as Yahoo!'s front page and My Yahoo!. Home pages that have few (perhaps only one) page view per session may find that inlining JavaScript and CSS results in faster end-user response times.
For front pages that are typically the first of many page views, there are techniques that leverage the reduction of HTTP requests that inlining provides, as well as the caching benefits achieved through using external files. One such technique is to inline JavaScript and CSS in the front page, but dynamically download the external files after the page has finished loading. Subsequent pages would reference the external files that should already be in the browser's cache.
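A rough sketch of that dynamic-download step (the filenames are illustrative):

// After the front page finishes loading, quietly fetch the external files
// so subsequent pages find them in the browser's cache.
window.onload = function () {
    var head = document.getElementsByTagName('head')[0];

    var script = document.createElement('script');
    script.type = 'text/javascript';
    script.src = '/js/site.js';      // illustrative filename
    head.appendChild(script);

    var link = document.createElement('link');
    link.rel = 'stylesheet';
    link.type = 'text/css';
    link.href = '/css/site.css';     // illustrative filename
    head.appendChild(link);
};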
Reduce DNS Lookups
tag: content
The Domain Name System (DNS) maps hostnames to IP addresses, just as phonebooks map people's names to their phone numbers. When you type www.yahoo.com into your browser, a DNS resolver contacted by the browser returns that server's IP address. DNS has a cost: it typically takes 20-120 milliseconds to look up the IP address for a given hostname. The browser can't download anything from this hostname until the DNS lookup is completed.
DNS lookups are cached for better performance. This caching can occur on a special caching server, maintained by the user's ISP or local area network, but there is also caching that occurs on the individual user's computer. The DNS information remains in the operating system's DNS cache (the "DNS Client service" on Microsoft Windows). Most browsers have their own caches, separate from the operating system's cache. As long as the browser keeps a DNS record in its own cache, it doesn't bother the operating system with a request for the record.
Internet Explorer caches DNS lookups for 30 minutes by default, as specified by the DnsCacheTimeout registry setting. Firefox caches DNS lookups for 1 minute, controlled by the network.dnsCacheExpiration configuration setting. (Fasterfox changes this to 1 hour.)
When the client's DNS cache is empty (for both the browser and the operating system), the number of DNS lookups is equal to the number of unique hostnames in the web page. This includes the hostnames used in the page's URL, images, script files, stylesheets, Flash objects, etc. Reducing the number of unique hostnames reduces the number of DNS lookups.
Reducing the number of unique hostnames has the potential to reduce the amount of parallel downloading that takes place in the page. Avoiding DNS lookups cuts response times, but reducing parallel downloads may increase response times. My guideline is to split these components across at least two but no more than four hostnames. This results in a good compromise between reducing DNS lookups and allowing a high degree of parallel downloads.
Minify JavaScript and CSS
tag: javascript, css
Minification is the practice of removing unnecessary characters from code to reduce its size, thereby improving load times. When code is minified all comments are removed, as well as unneeded white space characters (space, newline, and tab). In the case of JavaScript, this improves response time performance because the size of the downloaded file is reduced. Two popular tools for minifying JavaScript code are JSMin and YUI Compressor. YUI Compressor can also minify CSS.
Obfuscation is an alternative optimization that can be applied to source code. It's more complex than minification and thus more likely to generate bugs as a result of the obfuscation step itself. In a survey of ten top U.S. web sites, minification achieved a 21% size reduction versus 25% for obfuscation. Although obfuscation has a higher size reduction, minifying JavaScript is less risky.
In addition to minifying external scripts and styles, inlined <script> and <style> blocks can and should also be minified. Even if you gzip your scripts and styles, minifying them will still reduce the size by 5% or more. As the use and size of JavaScript and CSS increases, so will the savings gained by minifying your code.
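As a toy illustration (the function is made up), this readable source:

// toggle an element's visibility
function toggle(elementId) {
    var element = document.getElementById(elementId);
    element.style.display = (element.style.display === 'none') ? '' : 'none';
}

might come out of a minifier looking like:

function toggle(a){var b=document.getElementById(a);b.style.display=b.style.display==='none'?'':'none'}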
Avoid Redirects
tag: content
Redirects are accomplished using the 301 and 302 status codes. Here's an example of the HTTP headers in a 301 response:
HTTP/1.1 301 Moved Permanently
Location: http://example.com/newuri
Content-Type: text/html
The browser automatically takes the user to the URL specified in the Location field. All the information necessary for a redirect is in the headers. The body of the response is typically empty. Despite their names, neither a 301 nor a 302 response is cached in practice unless additional headers, such as Expires or Cache-Control, indicate it should be. The meta refresh tag and JavaScript are other ways to direct users to a different URL, but if you must do a redirect, the preferred technique is to use the standard 3xx HTTP status codes, primarily to ensure the back button works correctly.
The main thing to remember is that redirects slow down the user experience. Inserting a redirect between the user and the HTML document delays everything in the page since nothing in the page can be rendered and no components can start being downloaded until the HTML document has arrived.
One of the most wasteful redirects happens frequently, and web developers are generally not aware of it. It occurs when a trailing slash (/) is missing from a URL that should otherwise have one. For example, going to http://astrology.yahoo.com/astrology results in a 301 response containing a redirect to http://astrology.yahoo.com/astrology/ (notice the added trailing slash). This is fixed in Apache by using Alias or mod_rewrite, or the DirectorySlash directive if you're using Apache handlers.
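One possible fix, sketched with mod_rewrite (paths are illustrative): rewrite the slashless URL internally, so no 301 ever goes back over the network:

RewriteEngine On
# no [R] flag, so this is an internal rewrite rather than a redirect
RewriteRule ^/astrology$ /astrology/ [PT]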
Connecting an old web site to a new one is another common use for redirects. Others include connecting different parts of a website and directing the user based on certain conditions (type of browser, type of user account, etc.). Using a redirect to connect two web sites is simple and requires little additional coding. Although using redirects in these situations reduces the complexity for developers, it degrades the user experience. Alternatives for this use of redirects include using Alias and mod_rewrite if the two code paths are hosted on the same server. If a domain name change is the cause of using redirects, an alternative is to create a CNAME (a DNS record that creates an alias pointing from one domain name to another) in combination with Alias or mod_rewrite.
Remove Duplicate Scripts
tag: javascript
It hurts performance to include the same JavaScript file twice in one page. This isn't as unusual as you might think. A review of the ten top U.S. web sites shows that two of them contain a duplicated script. Two main factors increase the odds of a script being duplicated in a single web page: team size and number of scripts. When it does happen, duplicate scripts hurt performance by creating unnecessary HTTP requests and wasted JavaScript execution.
Unnecessary HTTP requests happen in Internet Explorer, but not in Firefox. In Internet Explorer, if an external script is included twice and is not cacheable, it generates two HTTP requests during page loading. Even if the script is cacheable, extra HTTP requests occur when the user reloads the page.
In addition to generating wasteful HTTP requests, time is wasted evaluating the script multiple times. This redundant JavaScript execution happens in both Firefox and Internet Explorer, regardless of whether the script is cacheable.
One way to avoid accidentally including the same script twice is to implement a script management module in your templating system. The typical way to include a script is to use the SCRIPT tag in your HTML page.
An alternative in PHP would be to create a function called insertScript. In addition to preventing the same script from being inserted multiple times, this function could handle other issues with scripts, such as dependency checking and adding version numbers to script filenames to support far-future Expires headers.
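The paragraph above suggests a PHP insertScript helper; here is a rough JavaScript sketch of the same idea for a server-side templating layer (everything beyond the function name is assumed):

// Track which scripts have already been emitted so the same file is
// never included twice on one page.
var insertedScripts = {};

function insertScript(filename) {
    if (insertedScripts[filename]) {
        return '';   // already on the page; emit nothing
    }
    insertedScripts[filename] = true;
    // A fuller version could also check dependencies and append version
    // numbers to filenames to support far-future Expires headers.
    return '<script src="/js/' + filename + '"><\/script>';
}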
Configure ETags
tag: server
Entity tags (ETags) are a mechanism that web servers and browsers use to determine whether the component in the browser's cache matches the one on the origin server. (An "entity" is another word for a "component": images, scripts, stylesheets, etc.) ETags were added to provide a mechanism for validating entities that is more flexible than the last-modified date. An ETag is a string that uniquely identifies a specific version of a component. The only format constraint is that the string be quoted. The origin server specifies the component's ETag using the ETag response header:

HTTP/1.1 200 OK
Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
ETag: "10c24bc-4ab-457e1c1f"
Content-Length: 12195

Later, if the browser has to validate a component, it uses the If-None-Match header to pass the ETag back to the origin server. If the ETags match, a 304 status code is returned, reducing the response by 12195 bytes for this example:

GET /i/yahoo.gif HTTP/1.1
Host: us.yimg.com
If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
If-None-Match: "10c24bc-4ab-457e1c1f"

HTTP/1.1 304 Not Modified

The problem with ETags is that they typically are constructed using attributes that make them unique to a specific server hosting a site. ETags won't match when a browser gets the original component from one server and later tries to validate that component on a different server, a situation that is all too common on web sites that use a cluster of servers to handle requests. By default, both Apache and IIS embed data in the ETag that dramatically reduces the odds of the validity test succeeding on web sites with multiple servers.
The ETag format for Apache 1.3 and 2.x is inode-size-timestamp. Although a given file may reside in the same directory across multiple servers, and have the same file size, permissions, timestamp, etc., its inode is different from one server to the next.

IIS 5.0 and 6.0 have a similar issue with ETags. The format for ETags on IIS is Filetimestamp:ChangeNumber. A ChangeNumber is a counter used to track configuration changes to IIS. It's unlikely that the ChangeNumber is the same across all IIS servers behind a web site.

The end result is that ETags generated by Apache and IIS for the exact same component won't match from one server to another. If the ETags don't match, the user doesn't receive the small, fast 304 response that ETags were designed for; instead, they'll get a normal 200 response along with all the data for the component. If you host your web site on just one server, this isn't a problem. But if you have multiple servers hosting your web site, and you're using Apache or IIS with the default ETag configuration, your users are getting slower pages, your servers have a higher load, you're consuming greater bandwidth, and proxies aren't caching your content efficiently. Even if your components have a far-future Expires header, a conditional GET request is still made whenever the user hits Reload or Refresh.

If you're not taking advantage of the flexible validation model that ETags provide, it's better to just remove the ETag altogether. The Last-Modified header validates based on the component's timestamp, and removing the ETag reduces the size of the HTTP headers in both the response and subsequent requests. This Microsoft Support article describes how to remove ETags. In Apache, this is done by simply adding the following line to your Apache configuration file:

FileETag none
Make Ajax Cacheable
tag: content
One of the cited benefits of Ajax is that it provides instantaneous feedback to the user because it requests information asynchronously from the backend web server. However, using Ajax is no guarantee that the user won't be twiddling his thumbs waiting for those asynchronous JavaScript and XML responses to return. In many applications, whether or not the user is kept waiting depends on how Ajax is used. For example, in a web-based email client the user will be kept waiting for the results of an Ajax request to find all the email messages that match their search criteria. It's important to remember that "asynchronous" does not imply "instantaneous".
To improve performance, it's important to optimize these Ajax responses. The most important way to improve the performance of Ajax is to make the responses cacheable, as discussed in Add an Expires or a Cache-Control Header. Several of the other rules apply to Ajax as well: gzip the responses, reduce DNS lookups, minify the JavaScript, avoid redirects, and configure ETags.
Let's look at an example. A Web 2.0 email client might use Ajax to download the user's address book for autocompletion. If the user hasn't modified her address book since the last time she used the email web app, the previous address book response could be read from cache if that Ajax response was made cacheable with a future Expires or Cache-Control header. The browser must be informed when to use a previously cached address book response versus requesting a new one. This could be done by adding a timestamp to the address book Ajax URL indicating the last time the user modified her address book, for example, &t=1190241612. If the address book hasn't been modified since the last download, the timestamp will be the same and the address book will be read from the browser's cache, eliminating an extra HTTP roundtrip. If the user has modified her address book, the timestamp ensures the new URL doesn't match the cached response, and the browser will request the updated address book entries.
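A sketch of that pattern (the URL and function name are assumed):

// Embed the address book's last-modified time in the Ajax URL so an
// unchanged book is served from the browser's cache.
function fetchAddressBook(lastModified, callback) {
    var xhr = new XMLHttpRequest();
    // same timestamp => same URL => cached response (given a far-future
    // Expires or Cache-Control header); new timestamp => fresh request
    xhr.open('GET', '/addressbook?t=' + lastModified, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            callback(xhr.responseText);
        }
    };
    xhr.send(null);
}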
Even though your Ajax responses are created dynamically, and might only be applicable to a single user, they can still be cached. Doing so will make your Web 2.0 apps faster.
Flush the Buffer Early
tag: server
When users request a page, it can take anywhere from 200 to 500ms for the backend server to stitch together the HTML page. During this time, the browser is idle as it waits for the data to arrive. In PHP you have the function flush(). It allows you to send your partially ready HTML response to the browser so that the browser can start fetching components while your backend is busy with the rest of the HTML page. The benefit is mainly seen on busy backends or light frontends.
A good place to consider flushing is right after the HEAD because the HTML for the head is usually easier to produce and it allows you to include any CSS and JavaScript files for the browser to start fetching in parallel while the backend is still processing.
Example, in PHP (a sketch of the idea above; the markup is illustrative):

    ... <!-- css, js -->
  </head>
  <?php flush(); ?>
  <body>
    ... <!-- content -->
Yahoo! search pioneered research and real user testing to prove the benefits of using this technique.
Use GET for AJAX Requests
tag: server
The Yahoo! Mail team found that when using XMLHttpRequest, POST is implemented in the browsers as a two-step process: sending the headers first, then sending the data. So it's best to use GET, which only takes one TCP packet to send (unless you have a lot of cookies). The maximum URL length in IE is 2K, so if you send more than 2K of data you might not be able to use GET.
An interesting side effect is that POST without actually posting any data behaves like GET. Based on the HTTP specs, GET is meant for retrieving information, so it makes sense (semantically) to use GET when you're only requesting data, as opposed to sending data to be stored server-side.
Post-load Components
You can take a closer look at your page and ask yourself: "What's absolutely required in order to render the page initially?" The rest of the content and components can wait.
JavaScript is an ideal candidate for splitting before and after the onload event. For example if you have JavaScript code and libraries that do drag and drop and animations, those can wait, because dragging elements on the page comes after the initial rendering. Other places to look for candidates for post-loading include hidden content (content that appears after a user action) and images below the fold.
Tools to help you out in your effort: YUI Image Loader allows you to delay images below the fold, and the YUI Get utility is an easy way to include JS and CSS on the fly. For an example in the wild take a look at Yahoo! Home Page with Firebug's Net Panel turned on.

It's good when the performance goals are in line with other web development best practices. In this case, the idea of progressive enhancement tells us that JavaScript, when supported, can improve the user experience, but you have to make sure the page works even without JavaScript. So after you've made sure the page works fine, you can enhance it with some post-loaded scripts that give you more bells and whistles such as drag and drop and animations.
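A minimal sketch of post-loading below-the-fold images (the data-src convention is an assumption; the markup would hold the real URL there and leave src pointing at a tiny placeholder):

// After onload, swap the real image URLs in for the placeholders.
window.onload = function () {
    var images = document.getElementsByTagName('img');
    for (var i = 0; i < images.length; i++) {
        var real = images[i].getAttribute('data-src');
        if (real) {
            images[i].src = real;
        }
    }
};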
Preload Components
Preload may look like the opposite of post-load, but it actually has a different goal. By preloading components you can take advantage of the time the browser is idle and request components (like images, styles and scripts) you'll need in the future. This way when the user visits the next page, you could have most of the components already in the cache and your page will load much faster for the user.
There are actually several types of preloading:
- Unconditional preload - as soon as onload fires, you go ahead and fetch some extra components (a sketch follows this list). Check google.com for an example of how a sprite image is requested onload. This sprite image is not needed on the google.com homepage, but it is needed on the consecutive search result page.
- Conditional preload - based on a user action you make an educated guess where the user is headed next and preload accordingly. On search.yahoo.com you can see how some extra components are requested after you start typing in the input box.
- Anticipated preload - preload in advance before launching a redesign. It often happens after a redesign that you hear: "The new site is cool, but it's slower than before". Part of the problem could be that the users were visiting your old site with a full cache, but the new one is always an empty cache experience. You can mitigate this side effect by preloading some components before you even launch the redesign. Your old site can use the time the browser is idle and request images and scripts that will be used by the new site.
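A minimal sketch of the unconditional variant (the image URL is assumed):

// Once the current page is done, warm the cache for the next one.
window.onload = function () {
    var img = new Image();
    img.src = '/images/results-sprite.png';   // used on the next page, not this one
};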
Reduce the Number of DOM Elements
A complex page means more bytes to download, and it also means slower DOM access in JavaScript. It makes a difference whether you loop through 500 or 5000 DOM elements on the page when you want to add an event handler, for example.
A high number of DOM elements can be a symptom that there's something that should be improved with the markup of the page without necessarily removing content. Are you using nested tables for layout purposes? Are you throwing in more DIVs only to fix layout issues?

A great help with layouts are the YUI CSS utilities: grids.css can help you with the overall layout, fonts.css and reset.css can help you strip away the browser's default formatting. This is a chance to start fresh and think about your markup, for example using DIVs only when it makes sense semantically, and not because they render a new line.
The number of DOM elements is easy to test; just type into Firebug's console:

document.getElementsByTagName('*').length
And how many DOM elements are too many? Check other similar pages that have good markup. For example the Yahoo! Home Page is a pretty busy page and still under 700 elements (HTML tags).
Split Components Across Domains
Splitting components allows you to maximize parallel downloads. Make sure you're using no more than 2-4 domains because of the DNS lookup penalty. For example, you can host your HTML and dynamic content on www.example.org and split static components between static1.example.org and static2.example.org.
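Illustrative markup only, with hostnames taken from the example above:

<img src="http://static1.example.org/logo.png" alt="logo">
<img src="http://static2.example.org/banner.png" alt="banner">
<script type="text/javascript" src="http://static1.example.org/js/site.js"></script>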
Minimize the Number of iframes
Iframes allow an HTML document to be inserted in the parent document. It's important to understand how iframes work so they can be used effectively.