Welcome to makedonikajournal.org

Web Hosting - The Internet and How It Works

In one sense, detailing the statement in the title would require at least a book. In another sense, it can't be fully explained at all, since there's no central authority that designs or implements the highly distributed entity called the Internet. But the basics can certainly be outlined, simply and briefly. And it's in the interest of any novice web site owner to have some idea of how their tree fits into that gigantic forest, full of complex paths, that is called the Internet.

The analogy to a forest is not far off. Every computer is a single plant, sometimes a little bush, sometimes a mighty tree. A percentage, to be sure, are weeds we could do without. In networking terminology, the individual plants are called 'nodes', and each one has an IP address and, usually, a domain name. Connecting those nodes are paths. The Internet, taken in total, is just the collection of all those plants and the pieces that allow for their interconnections - all the nodes and the paths between them.

Servers and clients (desktop computers, laptops, PDAs, cell phones and more) make up the most visible parts of the Internet. They store information and the programs that make the data accessible. But behind the scenes there are vitally important components - both hardware and software - that make the entire mesh possible and useful.

Though there's no single central authority, database, or computer that creates the World Wide Web, it's nonetheless true that not all computers are equal. There is a hierarchy, and it starts with the domain name system, a tree with many branches. Designators like .com, .net, .org, and so forth are familiar to everyone now. These are the Top Level Domains (TLDs), and the master lists for them are stored on a relatively small number of specialized systems maintained by a few non-profit organizations. From there, company networks and others form the Second Level Domains, such as Microsoft.com. That's further sub-divided into www.Microsoft.com, which is, technically, a sub-domain, though it's sometimes mis-named 'a host' or 'a domain'. A host is the name for one specific computer; that host's name may or may not be 'www', and usually isn't. The domain is the name without the 'www' in front. Finally, at the bottom of the pyramid, are the individual hosts (usually servers) that provide actual information and the means to share it.

Those hosts (along with other hardware and software that enable communication, such as routers) form a network. The set of all those networks, taken together, is the physical aspect of the Internet. There are less obvious aspects, too, that are essential.

When you click a URL (Uniform Resource Locator, such as http://www.microsoft.com) on a web page, your browser sends a request through the Internet to connect and get data. That request, and the data returned by it, is divided up into packets (chunks of data wrapped in routing and control information). That's one of the reasons you will often see a web page getting painted on the screen one section at a time. When packets take too long to get where they're supposed to go, that's a 'timeout'.

Suppose you request a set of names that are stored in a database, and suppose those names are stored in order. The packets they get shoved into for delivery can still arrive at your computer in any order. They're then reassembled into the right sequence and displayed.
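To make the request-and-reply idea concrete, here is a minimal sketch in Python (standard library only; the URL is just an example) of what a browser does at a high level: open a connection, ask for a page, and read the reply as it arrives in pieces rather than all at once. Note that the operating system has already reassembled the underlying packets by the time each read() returns; the splitting and re-ordering described above happens below this level.

    # Fetch a page and read the reply in chunks, roughly mirroring how
    # a browser receives data piece by piece. Standard library only.
    from urllib.request import urlopen

    with urlopen("http://www.microsoft.com") as response:  # example URL
        total = 0
        while True:
            chunk = response.read(4096)   # read up to 4 KB at a time
            if not chunk:                 # an empty read means the reply is complete
                break
            total += len(chunk)
            print(f"received {len(chunk)} bytes (running total: {total})")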
All those packets can be directed to the proper place because each is associated with a specific IP address, a numeric identifier that designates a host (a computer that 'hosts' data). But those numbers are hard to remember and work with, so names are layered on top of them: the domain names we started out discussing. Imagine the postal system as the Internet. Each home (a domain name) has a street address (an IP address). Those who live in them (programs) send and receive letters (packets). The letters contain news (database data, email messages, images) that's of interest to the residents. The Internet works very much the same way.
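As a small illustration of that layering of names over numbers, the sketch below (Python standard library; the domain is just an example) asks the system's resolver to translate a domain name into an IP address - the same lookup that happens invisibly every time a browser opens a URL.

    # Translate a domain name into an IP address using the system resolver.
    import socket

    domain = "www.microsoft.com"   # example domain
    ip_address = socket.gethostbyname(domain)
    print(f"{domain} resolves to {ip_address}")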

The Software Copyright Act Was a Great Step in the Right Direction

The software copyright act, which is actually called the Digital Millennium Copyright Act (DMCA), has given software developers a little more power when it comes to protecting their works. If you've bought software in the last few years, I'm sure you've noticed some of the changes that have been made in the software buying process. If not, then you really should wake up and take note. Some of the more noteworthy achievements of this act are the following:

1) It is now a crime to circumvent anti-piracy measures built into software.
2) It is no longer legal to make, sell, or give away software or devices invented for the purpose of cracking the codes that enable illegal copying of software.
3) It limits the liability of ISPs for copyright infringement when information is simply transmitted over their networks.

The problem isn't that people want to be bad or do something wrong. Most of us by nature want to do the right thing. The problem lies in educating people to the fact that it really is stealing when you bootleg, pirate, illegally download, or otherwise acquire copies of software that you didn't pay for. It's one of those 'white lie' types of crimes for most people, and they don't really see how it will hurt anyone for them to copy a game that their brother, cousin, uncle, or friend has. Someone paid for it, after all. The problem is that, with $50 or more being the average price for computer games and simple software, if 10 million people are doing it, the losses are staggering and they add up quickly. The software copyright act sought to protect businesses from losing money this way.

The software copyright act was also a response to a growing worldwide problem. Illegal downloading of music was so widespread that lawsuits and massive commercial ad campaigns were initiated in order to curtail it. That seems to be working to some degree. Fewer people are illegally downloading music; the downside is that these people aren't buying as much music either. The reason is that they are no longer being exposed to the wide variety of music and artists they encountered when downloading freely each night at no cost. That equals lower record sales, and it is becoming a problem for movie sales and software sales as well.

People aren't trying new games the way they could before the software copyright act, when they could go to LAN parties and everyone shared a copy to play; now everyone has to own a copy before they can play. This may mean a few (a minimal few at best) extra sales on the games for the sake of a great party, but for the most part it costs the companies the extra money that could be made by 10 people finding they liked the game enough to go out and buy it so they could play it whenever they wanted (and by the next group of 10 they would introduce the game to). Gamers are a funny group, and software copyright act or no, they are going to stick with the software and games that serve them best.

The software copyright act was created in order to protect the rights of those writing and developing computer software. We want those who fill our lives with fun games, useful tools, and great ways to connect with friends and family to continue providing these great services, and to get paid for the ones they've already provided. The software copyright act is one giant step in the right direction as far as I'm concerned.

Web Hosting - Sharing A Server - Things To Think About

You can often get a substantial discount off web hosting fees by sharing a server with other sites. Or you may have multiple sites of your own on the same system. But just as sharing a house has benefits and drawbacks, so does sharing a server.

The first consideration is availability. Shared servers get re-booted more often than stand-alone systems, and that can happen for multiple reasons. Another site's software may produce a problem, or make a change, that requires a re-boot. While that's less common on Unix-based systems than on Windows, it still happens. Be prepared for more scheduled and unplanned outages when you share a server.

Load is the next, and more obvious, issue. A single pickup truck can only haul so much weight. If the truck is already half-loaded with someone else's rocks, it will not haul yours as easily. Most websites are fairly static. A reader hits a page, then spends some time skimming it before loading another. During that time, the server has capacity to satisfy other requests without affecting you. All the shared resources - CPU, memory, disks, network and other components - can easily handle multiple users, up to a point.

But all servers have inherent capacity limitations. The component that processes software instructions (the CPU) can only do so much. Most large servers have more than one (some as many as 16), but there are still limits to what they can do. The more requests they receive, the busier they are. At a certain point, your software request (such as accessing a website page) has to wait a bit.

Memory on a server functions in a similar way. It's a shared resource and there is only so much of it. As it gets used up, the system lets one process use some, then another, in turn. Sharing that resource causes delays, and the more requests there are, the longer the delays. You may experience that as waiting for a page to appear in the browser or a file to download.

Bottlenecks can also appear in places outside, but connected to, the server itself. Network components get shared among multiple users along with everything else. And, as with those other resources, the more requests there are (and the longer they tie the components up), the longer the delays you notice.

The only way to get an objective look at whether a server and the connected network have enough capacity is to measure and test. All systems are capable of reporting how much of each resource is being used, and most can compile that information into some form of statistical report. Reviewing that data allows for a rational assessment of how much capacity is being used and how much is still available. It also allows a knowledgeable person to project how much more sharing is possible, and at what level of impact. Request that information from your host and, if necessary, get help in interpreting it. Then you can make a cost-benefit decision based on fact.
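As a rough example of the kind of measurement described above, the sketch below (Python, assuming the third-party psutil package is installed) prints a simple snapshot of CPU, memory, disk, and network use on a server. Hosting providers use more sophisticated monitoring, so treat this as an illustration of the idea rather than a substitute for their reports.

    # A simple capacity snapshot: how busy are the shared resources right now?
    # Assumes the third-party 'psutil' package (pip install psutil).
    import psutil

    cpu = psutil.cpu_percent(interval=1)   # CPU use over a one-second sample
    mem = psutil.virtual_memory()          # memory totals and usage
    disk = psutil.disk_usage("/")          # disk usage for the root filesystem
    net = psutil.net_io_counters()         # cumulative network traffic

    print(f"CPU in use:        {cpu:.1f}%")
    print(f"Memory in use:     {mem.percent:.1f}% of {mem.total // (1024**2)} MB")
    print(f"Disk in use:       {disk.percent:.1f}% of {disk.total // (1024**3)} GB")
    print(f"Network sent/recv: {net.bytes_sent // (1024**2)} MB / {net.bytes_recv // (1024**2)} MB")

Collected on a regular schedule, numbers like these turn the question of whether there is room for another site from guesswork into arithmetic.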