Intranet Vs. Internet

The age of information technology owes everything to the development of computer networking. The linking up of computers on ever-widening scales has turned the long-sought dream of making the world’s knowledge freely accessible into reality. If all computer networks are arranged in ascending order of complexity and scale, two types of networks lie at the extreme ends. One is, of course, the ‘Internet’; the other, which may be functioning in your company, is an ‘Intranet’.

Intranet Vs. Internet Comparison

Computer networking forms the fundamental basis of both intranets and the Internet. It enables the sharing of resources and information among a group of computers. The first operational computer network was created for the United States Department of Defense and was known as the Advanced Research Projects Agency Network (ARPANET). Since then, networking technology and architecture have evolved in complexity and sophistication to give us the Internet. The synergy of advanced networking hardware and sophisticated networking software based on Internet protocols has made this possible. As you will see below, an intranet and the Internet are two widely separated points on the scale of networking complexity, each providing the most efficient form of information and resource sharing at its own scale.

Definition

An intranet is an internal, private computer network, or a connection of one or more computer networks, whose use and access is restricted to an organization and its employees or members. Such networks are used for ease of information sharing and communication within companies. In information technology-based industries, intranets are practically indispensable, as the work involves a high degree of data sharing and collaboration among computer users. An intranet mostly operates through a website run by a local server, which acts as the resource-sharing medium; you could call it a scaled-down, private Internet. Transfer of data over such a private, website-based network may be handled using Internet protocols like HTTP (Hypertext Transfer Protocol), SMTP (Simple Mail Transfer Protocol), and FTP (File Transfer Protocol). However, not all intranets use private websites. In some organizations, intranets are meant purely for file sharing, with no private website or Internet protocol use required.
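As a concrete illustration of the website-based intranet, the following is a minimal Python sketch of a client fetching a page from a hypothetical internal web server over HTTP; the hostname intranet.example.local and the page path are placeholders, not real addresses.

```python
# A minimal sketch of fetching a page from a hypothetical intranet web server
# over HTTP. The hostname and path are placeholders; a real intranet would use
# whatever address the organization's local server exposes.
from urllib.request import urlopen

def fetch_intranet_page(url="http://intranet.example.local/handbook.html"):
    # The request stays within the private network: the name resolves to the
    # organization's own web server rather than a public Internet host.
    with urlopen(url, timeout=5) as response:
        return response.read().decode("utf-8")

if __name__ == "__main__":
    print(fetch_intranet_page()[:200])  # show the first 200 characters
```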

The very fact that you are accessing this information through the Internet shows that you already have an idea of what the Internet can do for you. Take the small networks spread across homes, offices, and campuses in a region and integrate them to form local area networks (LANs). Integrate those LANs into wide area networks (WANs), integrate the WANs into national networks, and then join all such national networks together. What you ultimately end up with is the Internet! It is a single network that connects computers all over the world using Internet protocols, which make information sharing and routing possible. The main point of difference between an intranet and the Internet is this: while an intranet involves the networking of at most a few hundred computers, the Internet is a network of more than a billion computers spread worldwide. It uses an ever-improving set of Internet protocols (HTTP, FTP, SMTP, etc.) to transfer data. Unlike an intranet, which can be run as a pure file-sharing network, the Internet and the information-sharing service that runs on it, the World Wide Web, cannot function without these protocols.

Structure, Scale & Complexity

The structure of an intranet is similar in principle to that of the Internet. Both use a server-client structure, and both transfer data using Internet protocols. However, they differ fundamentally because of the vast difference in networking scale and complexity.
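As a sketch of the server-client structure both networks share, here is a minimal Python example: a TCP server accepts one connection and echoes back whatever the client sends. The loopback address and port are arbitrary values chosen only for demonstration; on a real intranet the server would run on a dedicated host.

```python
# A minimal sketch of the server-client structure shared by intranets and the
# Internet: a TCP server that echoes what one client sends.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050  # loopback address and port, for illustration only

def run_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()           # wait for a single client
        with conn:
            data = conn.recv(1024)           # read the client's request
            conn.sendall(b"echo: " + data)   # send the reply back

def run_client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello from the client")
        print(cli.recv(1024).decode())       # prints "echo: hello from the client"

if __name__ == "__main__":
    server = threading.Thread(target=run_server)
    server.start()
    time.sleep(0.2)   # give the server a moment to start listening
    run_client()
    server.join()
```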

With billions of computers sharing and transferring data amongst each other, administering the Internet, the mother of all computer networks, is far more complex. A huge amount of networking and routing hardware is needed to connect all the computers worldwide. An intranet, on the other hand, being restricted to a comparatively small number of computers, is much easier to manage.

Server Control

One of the most fundamental points of difference is server control. An intranet is controlled by a single server, or a small server cluster, which can adequately handle all tasks and has complete control over the entire network.

The Internet, by contrast, is operated by a vast, linked set of servers spread worldwide. The sheer volume of data exchanged over the Internet makes it inevitable that control be decentralized; it is the difference between governing a city and governing a nation. Governed by a common architecture, servers spread across the world exchange data with client computers through the use of Internet protocols.

Uses

An intranet is built within an organization to enable resource sharing and to provide a rapid communication channel that efficiently connects team members and peers. Corporate intranets have restricted access, controlled by user IDs and passwords, and are not accessible to anybody on the outside. In some cases, external access may be granted through Virtual Private Networks (VPNs) to let remote employees connect to the network. Intranets provide only the limited set of services the organization requires. Improved productivity, cost savings, and rapid communication are some of the inherent advantages of an intranet.

The Internet and an intranet also differ in their uses. The Internet is a global network whose goal is information sharing on a global level. It is more open in the sense that everything shared on it is accessible to every person connected to it, all over the world. Another major point of differentiation is the range of services offered. The Internet offers the full spectrum of services to its users, compared to the very restricted set offered by an intranet. From cloud computing, e-mail, FTP, the World Wide Web, and peer-to-peer data sharing to VoIP services and more, the Internet caters to every netizen’s needs.

The difference is primarily one of scale, complexity, and manageability; the fundamental principles and technologies underlying both networks remain the same.

Software Developer Salary

With the growth of the information technology industry, the demand for talented and skilled software developers has risen sharply. The software industry has made tremendous progress in the US and in Asian countries like India, creating many job opportunities. As a result, both junior and senior level salaries for these professionals continue to show a strong upward trend.

Pay Range

The salary of software developers largely depends on their years of experience, place of work, educational qualifications, and skills. Though the average salary is quite high compared to several other professions, entry-level pay may be low in some regions. According to job market experts, the median salary is around USD 72,000 per year, while a typical experienced programmer earns around USD 60,000 per year. Those with less than two years of experience may earn in the range of USD 35,000 to USD 45,000 per year, and those with three to five years of experience can earn between USD 45,000 and USD 70,000. Developers who have been in the industry for eight to ten years can make between USD 65,000 and USD 90,000 per year, while senior professionals can earn in the range of USD 100,000 to USD 175,000 or even more.

Job Description

Software developers understand and interpret technical documents to create software applications. They also update existing tools and ensure that they keep working efficiently. Senior developers take on the responsibility of monitoring, supervising, and checking the work of their juniors. They have to take the needs and requirements of the users into account while preparing the design. Conducting training sessions to teach users the software is also part of their duties, as is creating test plans and technical specifications. They often work in teams or groups to complete the assigned tasks on time and within budget.

Requirements

In order to become a software developer, you need at least a bachelor’s degree in information technology or computer science from a reputed university. Securing admission to such a program requires scoring well in subjects like math, physics, and English in high school. A master’s degree in computer science can be an ideal way of entering this field, as this is what top employers generally look for. Initially, you might have to work as a trainee for three to six months before being taken on as a full-time employee. Knowledge of the latest software and programming languages, along with relevant certifications, can be an added advantage when looking for jobs in top firms.

Waterfall Model in Software Engineering

The waterfall model is probably the oldest and best-known software development model. Its role in software engineering is as important as its role in software testing. It forms the basic design from which, over the years, a number of other software process models have been developed and implemented.

Waterfall Model and Software Engineering

The waterfall model is so named because it employs a top-down approach, with each phase cascading into the next, much like water falling from a height under the influence of gravity. The following is a brief explanation of the different phases in the waterfall model.

Phases
Whether you are developing software for a small or a large project, the waterfall model suggests that you go through the phases given below in a step-by-step manner.

First and foremost, you need to completely analyze the problem definition and all the project requirements. This phase is commonly referred to as ‘Requirement Analysis’. Once you have thoroughly and exhaustively identified and understood all the project requirements, they are properly documented, after which you move on to the next phase.

That next phase, ‘System Design’, involves analyzing and specifying the project’s hardware and software requirements and their inter-relation. Here, the entire software aspect of the project is broken down into different logical modules or blocks, which are identified and systematically documented.

‘System Implementation’ is the next phase. It involves writing the software code and actually implementing the programming ideas and algorithms decided upon in the previous phase.

Once coding and implementation are complete, the development process moves on to ‘System Testing’. The code that has been written is subjected to a series of tests to detect and determine whether there are any bugs, errors, or software failures.

Once all the repair work, i.e. correcting and rewriting every piece of erroneous or flawed code, is completed, you move on to the final phase, ‘System Deployment and Maintenance’. As the name suggests, this phase is nothing but handing over the completed project to the client or customer, and subsequently performing maintenance activities on a periodic basis, if needed.
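As a minimal Python sketch of the model’s strictly sequential flow, with the phase names taken from the description above and the actual work of each phase reduced to a placeholder, one could write:

```python
# A toy sketch of the waterfall model's strictly sequential flow: each phase
# begins only after the previous one has completed, and there is no going back.
PHASES = [
    "Requirement Analysis",
    "System Design",
    "System Implementation",
    "System Testing",
    "System Deployment and Maintenance",
]

def run_phase(name, deliverables):
    # Placeholder for the real work of a phase; it simply records an output
    # document, standing in for requirement specs, design docs, code, etc.
    print(f"Completing phase: {name}")
    deliverables[name] = f"{name} document"

def waterfall():
    deliverables = {}
    for phase in PHASES:        # fixed order, one phase at a time
        run_phase(phase, deliverables)
    return deliverables

if __name__ == "__main__":
    waterfall()
```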

Advantages and Disadvantages
Let us now examine the pros and cons of the waterfall model in software engineering as well as in software testing.

Pros
It is the simplest software development model and also the easiest process to implement.
This model is simple to understand and therefore is implemented at various project management levels, in a number of different fields.
It employs an orthodox, yet systematic and effective method of project development and delivery.
Cons
Since it is not an iterative model, it has its fair share of shortcomings and drawbacks.
Being a strictly sequential model, jumping back and forth between two or more phases is not possible. The next phase can be reached only after the previous one has been completed.
Bugs and errors in the code cannot be discovered until the testing phase is reached. This can waste a great deal of time and other precious resources.
This process model is not suitable for projects wherein the project requirements are dynamic or constantly changing.

Software Development Life Cycle

The software development life cycle is the step-by-step process involved in the development of a software product. It is also referred to as the software development process. The whole process is generally divided into a set of steps, with a specific operation carried out in each step.

Classification

The basic classification of the whole process is as follows:
– Planning
– Analysis
– Design
– Development
– Implementation
– Testing
– Deployment
– Maintenance
Each step of the process has its own importance and plays a significant part. A description of each step gives a better understanding of the whole.

Planning

This is the first and one of the most important stages of development. The basic motive is to plan the total project and to estimate its merits and demerits. The planning phase includes defining the intended system, developing the project plan, and managing the plan throughout the project.

A good, mature plan creates a strong start and positively affects the entire project.

Analysis

The main aim of the analysis phase is to gather statistics and requirements. Based on the analysis of the project and the results of the planning phase, the requirements for the project are decided and gathered.

Once the requirements for the project are gathered, they are prioritized and made ready for further use. The decisions taken in the analysis phase follow directly from the requirements analysis, and the phases that come after it are defined here.

Design

Once the analysis is over, the design phase begins. Its aim is to create the architecture of the total system. This is one of the important stages of the process and serves as a benchmark stage, since errors made up to and during this stage can be cleared here.

Many developers build a prototype of the entire software, representing it as a miniature model. Flaws, both technical and design-related, can be found and removed, and the process can be redesigned accordingly.

Development and Implementation

The development and implementation phase is the most important phase, since it is where the main part of the project is done. The basic work includes designing the technical architecture and maintaining the database records and programs related to the development process.

One of the main activities is turning the prototype into a full-fledged working system, which becomes the final product or software.

Testing and Deployment

The testing phase is one of the final stages of the development process, and this is the phase where the final adjustments are made before presenting the completely developed software to the end-user.

In general, testers take on the task of removing logical errors and bugs. The test conditions decided in the analysis phase are applied to the system, and if the output obtained matches the intended output, the software is ready to be handed over to the user.
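As a minimal sketch of that idea, the following Python snippet applies a few predefined test conditions to a hypothetical function and compares the output obtained with the intended output; the function and the expected values are purely illustrative.

```python
# A toy sketch of the testing phase: apply predefined test conditions and
# compare the obtained output with the intended output.
def apply_discount(price, percent):
    """Hypothetical function produced during the development phase."""
    return round(price * (1 - percent / 100), 2)

# Test conditions decided earlier: (input arguments, intended output).
test_conditions = [
    ((100.0, 10), 90.0),
    ((250.0, 0), 250.0),
    ((80.0, 25), 60.0),
]

def run_tests():
    for (price, percent), expected in test_conditions:
        actual = apply_discount(price, percent)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: apply_discount({price}, {percent}) = {actual}, expected {expected}")

if __name__ == "__main__":
    run_tests()
```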

Maintenance

The toughest job is encountered in the maintenance phase, which normally accounts for the highest share of the cost. The maintenance team is chosen to monitor changes in the organization and in the software’s use, and to report to the developers whenever a need arises.

A help desk is also set up in this phase. It serves to maintain the relationship between the user and the creator of the software.

Reverse Engineering for Software Debugging

Reverse engineering in computer programming is the skill of taking software back toward its original form through a series of steps, ideally all the way to the source code level. Quite often, software cannot be brought fully down to source code, but it can be brought down to the assembly language level. Assembly language is a CPU-understandable language that differs across CPU architectures.

Assembly language consists of instructions, the assembly codes, which define the flow of a program, its structure, its functions, and so on. Everything the software is capable of doing can be modified or removed using these codes. Debugging is finding bugs in software and correcting them as and when necessary.
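Reverse engineering usually targets compiled machine code, but a safe, analogous view can be had in Python with its built-in dis module, which lists the low-level bytecode instructions, roughly the counterpart of an assembly listing, that define a function’s flow and operations:

```python
# Using Python's built-in dis module to inspect the low-level instructions of
# a function, analogous to the assembly listing a disassembler would produce
# for compiled software.
import dis

def greet(name):
    message = "Hello, " + name
    return message.upper()

# Each line of output is one bytecode instruction: loads, a string
# concatenation, a method call, and a return, i.e. the program's flow laid
# bare at the instruction level.
dis.dis(greet)
```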

Debugging is most often done during the development phase, that is, while the software is being coded. However, some bugs and errors cannot be corrected at this stage. They can be identified and corrected when the program’s source code is small, but it becomes extremely difficult when the code is huge and complex. By understanding the techniques, procedures, and tools of reverse engineering, programmers can eliminate such bugs and build better software.

This process is not just about the bugs; the entire exercise of developing software becomes crisper and more precise. Extensibility is another major advantage of reverse engineering, as we see when software companies release patches for a security exploit or a missing feature.

Today, the information highway has bred many crackers who exploit and misuse technology. Crackers are people who reverse engineer software not for the purpose of debugging, but for breaking into it. They use its tools and techniques to defeat authentication and security mechanisms. Crackers steal passwords and patch software illegally, and they can automate this by creating cracks. Cracks are small utility programs, distributed across the Internet and by e-mail, which help other people break the security mechanisms of software with just the click of a button and without any prior knowledge.

Although this process has caused, and continues to cause, certain problems, it is here to stay, to help build better software. As the old saying goes, "what’s good is going to be broken"; the only way out of the misuse of reverse engineering is to outwit the cracker.

Software Engineering – Reason and a Concept!

A few decades back, when the computer had just been born and was completely new to people, very few could operate one, and software was given little emphasis. At that time, hardware was the most important factor deciding the cost of implementation and the success rate of the system being developed. Very few people knew programming; it was considered an art gifted to a few rather than a skill of logical thinking. This approach was full of risk, and in most cases the system undertaken for development never reached completion. Soon afterwards, some emphasis began to be placed on software development, starting a new era in which people slowly gave software development more importance.

People who wrote software hardly followed any methodology, approach, or discipline that would lead to the successful implementation of a bug-free, fully functional system. There was hardly any specific documentation, system design approach, or related material; such things were confined to those who developed hardware systems. Software development plans and designs existed only as concepts in people’s minds.

Even after a number of people had entered the field, the lack of proper development strategies, documentation, and maintenance plans meant that the software being developed was costlier than before and took longer to build; sometimes it was next to impossible to predict the completion date of a system under development. The number of lines of code grew very large, increasing the complexity of the project, and as complexity increased, so did the number of bugs. Much of the time, the delivered system was unusable by the customer because of problems such as extremely late delivery and sheer numbers of defects, and there were no plans for maintaining the system once it was in use. This led to the situation called the ‘Software Crisis’. Most software projects, which existed only as concepts in the mind with no standard methodologies or practices to follow, ended in failure, causing losses of millions of dollars.

The ‘Software Crisis’ made people think seriously about software development processes and about practices that could be followed to ensure a successful, cost-effective implementation, delivered on time and actually usable by the customer. People were compelled to think of new ideas for the systematic development of software systems. This gave birth to the most crucial part of the software development process, one that embodies modern thinking and the basics of project management: the idea that software development should be approached from an engineering perspective. This approach is called ‘Software Engineering’.

The standard definition of ‘Software Engineering’ is ‘the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.’

Software engineering uses a systematic approach to developing any software project. It shows how a project can be handled systematically and cost-effectively and completed successfully, with a higher success rate. It includes planning and developing strategies, defining timelines, following guidelines to ensure the successful completion of particular phases, following predefined software development life cycles, and using documentation plans for follow-ups, all in order to complete the various phases of the software development process and provide better support for the system developed.

Software engineering takes an all-round approach to finding out the customer’s needs, and it even asks customers for their opinions before proceeding towards development of the desired product. Various methodologies and practices, such as the ‘Waterfall Model’ and the ‘Spiral Model’, have been developed under software engineering; they provide guidelines to follow during development and help ensure on-time completion of the project. These approaches divide the software development process into small tasks or phases, such as requirement gathering and analysis, system design, and coding, which makes the project much easier to manage. They also help in understanding the problems faced, both during development and after the system has been deployed at the customer’s site, and in deciding the strategies for dealing with them and providing strong support for the system developed. For example, problems with one phase are resolved in the next, and after deployment, user queries and previously undetected bugs are handled as part of support and maintenance; all of these strategies are decided while following the chosen methodology.

Hyper-Threading Technology

We all want our computers to be as speedy as they can be. There are many different ways to increase computer performance through different types of upgrades. Processors have become speedier because of demand and competition. To make processors fast, chipmakers have been creating new CPU architectures to process information and milk every ounce of processing power available. Intel created Hyper-Threading technology as an upgrade in CPU architecture and quietly integrated it into some of their processors for development and testing purposes.

It is based on the idea of simultaneous multi-threading (SMT), in which a single processor executes multiple threads at once. Instead of relying on multiple physical processors to run multiple threads, Intel created multiple logical processors inside a single physical CPU. Intel recognized that CPUs are inherently inefficient and have a lot of computing power that never gets used.

It allows multi-threaded software applications to execute threads in parallel, and the improved resource utilization provides higher processing throughput. It is essentially a more advanced form of super-threading that was first introduced on Intel Xeon processors and later added to Pentium 4 processors. This type of threading technology had not previously been present in general-purpose microprocessors.

To boost performance, software was threaded by splitting its instructions into multiple streams so that multiple processors could act upon them. With Hyper-Threading, this threading can also be exploited at the processor level, providing more efficient use of resources, greater parallelism, and improved performance on today’s multi-threaded software.
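As a small sketch of the software side of this, splitting a workload into independent streams of work that the operating system can schedule onto the available logical processors, one might write the following Python snippet. The checksum function and the fake data chunks are purely illustrative, and in CPython the degree of true parallelism also depends on whether the interpreter’s global lock is released during the work.

```python
# A toy multi-threaded workload: the work is split into independent streams
# (one checksum per data chunk) that the OS can schedule across the logical
# processors a Hyper-Threading CPU exposes. Function and data are illustrative.
from concurrent.futures import ThreadPoolExecutor
import hashlib

def checksum(chunk: bytes) -> str:
    # Hash one chunk of data; each call is an independent unit of work.
    return hashlib.sha256(chunk).hexdigest()

# Eight fake one-megabyte data chunks standing in for real input.
chunks = [bytes([i]) * 1_000_000 for i in range(8)]

with ThreadPoolExecutor(max_workers=4) as pool:
    digests = list(pool.map(checksum, chunks))   # threads run concurrently

for i, digest in enumerate(digests):
    print(f"chunk {i}: {digest[:16]}...")
```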

Hyper-Threading is a multi-threading technology in which SMT is achieved by duplicating the architectural state on each processor, while sharing one set of processor execution resources. It also produces faster response times for a multi-tasking workload environment. By permitting the processor to use on-die resources that would otherwise have been idle, it offers a performance boost on multi-threading and multi-tasking operations for the microarchitecture.

In a CPU, every clock cycle has the ability to carry out one or more operations, but a single processor can only handle so much during an individual cycle. Hyper-Threading permits a single physical CPU to fool an operating system capable of SMT operations into thinking there are two processors.

It produces logical processors to handle multiple threads in the same time slice, where a single physical processor would normally be able to handle only a single operation. There are some prerequisites that must be satisfied before you can take advantage of this technology. First, you must have a Hyper-Threading-enabled processor, chipset, BIOS, and operating system. Further, your operating system must support multiple threads. Finally, the number and types of applications being used also make a difference to the performance gain.
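A quick way to see the first prerequisite from the operating system’s point of view is to compare the number of logical processors it reports with the number of physical cores. The sketch below uses the Python standard library for the logical count and, as an assumption, the third-party psutil package for the physical count.

```python
# Checking how many logical processors the OS sees. On a Hyper-Threading
# system the logical count is typically twice the number of physical cores.
import os

print("Logical processors visible to the OS:", os.cpu_count())

try:
    # psutil is a third-party package (assumed installed) that can report the
    # physical core count for comparison.
    import psutil
    print("Physical cores:", psutil.cpu_count(logical=False))
except ImportError:
    print("psutil not installed; physical core count unavailable")
```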

Hyper-Threading is a hardware upgrade that makes use of the wasted power of a CPU, but it also helps the operating system and applications to run more efficiently, to do more at once. There are millions of transistors inside a CPU that turn on and off to process commands.

By adding more transistors, chipmakers typically add more brute-force computing power. More transistors, however, mean a larger CPU and more heat. Hyper-Threading aims to increase performance without significantly increasing the number of transistors on the chip, keeping the CPU footprint small.

It offers two logical processors in one physical package. Each logical processor must share external resources like memory, hard disk, etc. and must also use the same physical processor for computations. The performance boost will not scale the same way as a true multiprocessor architecture, because of the shared nature of Hyper-Threading processors. System performance will be somewhere between that of a single CPU without Hyper-Threading and a multi-processor system with two comparable CPUs.

This technology is not tied to any particular platform. Some applications are already multi-threaded and will automatically benefit from it. Multi-threaded applications take full advantage of the increased performance, letting users see immediate gains when multitasking. It also improves reaction and response times and increases the number of users a server can support. Today’s multi-processing software is compatible with Hyper-Threading-enabled platforms, but further performance gains can only be realized by specifically tuning the software to use it. For future software optimization and business growth, this technology complements traditional multi-processing by providing additional headroom.