Explore the world with Virtual Reality

Virtual Reality is a term we hear all the time when we talk about the technology of the future world. What is this Virtual Reality? Keep on reading to find out about 'Virtual Reality in the Future World'.



Simply put, Virtual Reality is a computing platform. Decades ago we had mainframe computers, then personal computers, then, in more recent history, the web, and the computing platform we use today is the mobile platform. With the advancement of technology over the next few years, the mobile computing platform will become outdated as well, and Virtual Reality is expected to be the next computing platform to be popularized in the world.

Virtual reality is a technology that can convince our brain that it is in an artificial environment. This allows one to perform an activity in the artificial environment in such a way that it is perceived as real. Often this artificial environment is created using one or more displays, together with electronic devices carrying special sensors that persuade the brain it is in a real environment. Special helmets with a screen and various tools fitted with sensors can be mentioned as the electronic devices that help us experience Virtual Reality.

History of Virtual Reality…

If one considers the history of this technology, its origins lie in a device named 'Sensorama', invented by Morton Heilig in the 1950s. It gave viewers the opportunity to watch films in 3D. A few years later, the 'Headsight' device and 'The Ultimate Display' concept created the background for the virtual reality systems that followed.



Types of Virtual Reality

There are basically two types of virtual reality.

     1. Immersive Virtual Reality
     2. Non-Immersive Virtual Reality

Immersive Virtual Reality

This allows people to interact with the built environment. People who enter that environment feel like they are truly a part of it, and are able to walk in the artificial environment, interact with one another, and engage in various activities.

Non-Immersive Virtual Reality

This allows people to interact with the built environment using a mouse, keyboard, joystick, trackball, data glove, or space ball. The person is not immersed in the environment, but the screen can look very similar to the real environment.


Use of Virtual Reality in the Modern World


Today, Virtual Reality is used in many fields of research and day-to-day work. It is often used in the manufacture of automobiles and machinery, helping manufacturers see whether their products work properly and address their shortcomings without actually building them. Similarly, Virtual Reality is used to train soldiers and combat pilots for warfare. It also enables medical students to get a better understanding of surgeries and of the internal functioning of the body, its various mechanisms, and its processes.



Final Thoughts...

Thus we see that the concept of Virtual Reality is already widely used in the world. The basic cost associated with this technology is so high that it is still common only in countries where technology is advanced enough and the cost can be borne. But the prevailing opinion in the world today is that this technology will become virtually indispensable within another decade.

Till we come to you with a new article, bye for now! Don't forget to share our articles with others and give us your support.

Thank you...



Access the Internet in space with the InterPlanetary Internet

At present, the Internet is not just a facility that can be enjoyed on Earth. With the development of technology, the Internet is no longer restricted to our planet. When the Internet goes beyond our planet, it becomes an Internet connection between two or more planets, so the connection must be compatible with the requirements of using the Internet from a great distance from the Earth. Communication between planets also involves long delays and the risk of data loss, and the network must be able to tolerate them. The 'Delay-Tolerant Network' (DTN) is one such network architecture. It is very similar to the technology used in Internet networks on Earth, but it works beyond the Earth's boundaries; therefore, it is called the 'InterPlanetary Internet'.


Internet on Earth vs. InterPlanetary Internet

The Internet on Earth has been built from controlling computers, server computers, and millions of ordinary user computers that together carry all the data and information exchanged through it.

Likewise, the InterPlanetary Internet is built from a collection of main devices and many other components, including ground stations, orbiters, landers & rovers, deployed on other planets as well as on Earth. NASA has already planned a model of such a system to create a network between the Earth and Mars, named the 'Mars Network'. In it, many small satellites are sent into orbit and connected to the ground stations on Mars. All of them connect to a main satellite in Mars orbit, which in turn connects to a satellite in Earth orbit. From that satellite, the received data are retrieved by a ground station on Earth. So, as anyone can understand, the InterPlanetary Internet, how it works, the technology used, the desired results and so on, is very different from the normal Internet used on Earth.

The Internet runs on technological protocols which were designed and developed specially to operate the Internet on Earth; 'IP' & 'TCP' are the major ones among them. Likewise, special protocols such as the 'Bundle Protocol' (BP) of Delay-Tolerant Networking (DTN) are needed to run the InterPlanetary Internet.
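To make the 'store and forward' idea behind BP and DTN concrete, here is a minimal Python sketch. The node names, the Bundle class, and the contact-window flag are illustrative assumptions of mine, not the real Bundle Protocol API: a node keeps a bundle in storage for as long as the link is down, and forwards it only when a contact window opens.

```python
# A minimal sketch of delay-tolerant "store and forward", the core idea
# behind the Bundle Protocol. Names and classes here are illustrative,
# not the real BP specification.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Bundle:
    created_at: float                       # oldest bundles forwarded first
    payload: str = field(compare=False)

class DTNNode:
    def __init__(self, name: str):
        self.name = name
        self.stored = []                    # bundles wait here while no link exists

    def receive(self, bundle: Bundle) -> None:
        heapq.heappush(self.stored, bundle) # store; never drop for lack of a link

    def forward_when_in_contact(self, peer: "DTNNode", in_contact: bool) -> None:
        # Unlike TCP, we do not fail when the link is down: we simply wait.
        if in_contact and self.stored:
            peer.receive(heapq.heappop(self.stored))

# Usage: an Earth station relays to a Mars orbiter only during a contact window.
earth, orbiter = DTNNode("earth"), DTNNode("mars-orbiter")
earth.receive(Bundle(0.0, "telemetry request"))
earth.forward_when_in_contact(orbiter, in_contact=False)  # link down: bundle stays stored
earth.forward_when_in_contact(orbiter, in_contact=True)   # contact window: bundle moves on
print(len(orbiter.stored))                                # -> 1
```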

Even a small failure in the InterPlanetary Internet can take days to recover from. So it has to be extremely efficient and reliable, although it is used by a few scientists rather than by the billions of people who use the Internet on Earth.
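The scale of those delays follows from physics alone. The sketch below computes the one-way light travel time between the Earth and Mars; the distances are rough round figures of mine, used only for illustration.

```python
# One-way light travel time between Earth and Mars at two extremes.
C_KM_PER_S = 299_792                     # speed of light in vacuum

def one_way_delay_minutes(distance_km: float) -> float:
    return distance_km / C_KM_PER_S / 60

print(f"closest approach: {one_way_delay_minutes(54.6e6):.1f} min")  # ~3 min
print(f"farthest apart:   {one_way_delay_minutes(401e6):.1f} min")   # ~22 min
```

Every request-response pair pays that delay twice, which is why chatty interactive protocols like TCP are a poor fit beyond Earth.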


Development of InterPlanetary Internet

Finding a proper technology for all the existing requirements is the most challenging part of this subject. Mainly, there are two problems that need to be solved:

# Developing a protocol which suits all the spacecraft & control centers
# The ability to share data & information with as little delay as possible

The US space agency is working on developing such a technology. One of their teams is working on this at the 'Goddard Space Flight Centre'. It has been more than 15 years since this project, named 'CANDOS' (Communication and Navigation Demonstration on Shuttle), began. Their idea is to create the InterPlanetary Internet by building on the existing Internet on Earth.

The other team is based at the 'Jet Propulsion Laboratory' (JPL) at NASA. Vint Cerf, who is known as a father of the Internet, is leading this team. Their idea is to develop special protocols and, with the use of them, create the InterPlanetary Internet.

Developing everything from scratch is not as easy as we may think; it takes a lot of time and money. But creating the InterPlanetary Internet by building on the existing Internet will save both time and cash, as the Internet is a platform which has been developed and maintained for many years and has many resources to reuse. Anyway, this process needs to be done properly, and we hope it will be completed as soon as possible so that we will be able to see how it works. Let's keep on looking…


Learn more about InterPlanetary Internet


# CCSDS – Consultative Committee for Space Data Systems
     (https://public.ccsds.org/default.aspx)

# SCPS – Space Communications Protocol Specifications
     (http://www.scps.org)

# DTN – Delay Tolerant Network
     (https://www.nasa.gov/content/dtn)

# LCRD – Laser Communications Relay Demonstration
     (https://www.nasa.gov/mission_pages/tdm/lcrd/index.html)

# SCA – Space Communication Architecture
     (https://www.nasa.gov/mission_pages/tdm/lcrd/index.html)

Access high-speed Internet with WiMAX Technology

There are many ways to access the Internet nowadays. Broadband, Wi-Fi & dial-up are the most popular among them. But still, each of those connection types has at least one or two weaknesses.

Ex:- 
Broadband – Very expensive
Wi-Fi – Not accessible from everywhere; you need to be in a Wi-Fi hotspot area
Dial-up – Speed is very low

Therefore a new method of connecting to the Internet had to be found, one without the weaknesses of the current methods. Some of the features expected from that new method are as follows.

# Must be faster than the others
# Must be accessible from anywhere, under any circumstances
# Must not be expensive, and must be easy to deploy

A new technology with all the above-mentioned features has been under development since 2001, and by now it has become very popular. That technology is nothing else: it's 'WiMAX' (Worldwide Interoperability for Microwave Access). Today, WiMAX is heavily used to deliver speedy broadband connections and to provide Internet access to the countryside. Millions of people all around the world access 4G broadband Internet connections through WiMAX; in the Asia Pacific, 29% of broadband connections are provided by this technology.

WiMAX System

The system developed to use this WiMAX technology is a combination of two parts.

WiMAX Tower – It is similar to a mobile connection tower. A WiMAX tower can provide a broadband Internet connection to a wide area of about 3,000 square miles (a figure we sanity-check in the sketch after this list).

WiMAX Receiver – It is a small device with an antenna. There's a special module in it, a PCMCIA card; in some modern laptops, it comes as a built-in feature.
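Where does the tower's ~3,000-square-mile figure come from? WiMAX towers are usually quoted with roughly a 30-mile radius, and the area of that circle lands right around the quoted number. A quick illustrative calculation:

```python
# Sanity check of the coverage figure quoted for a WiMAX tower:
# a circle of 30-mile radius (an often-quoted value, used here as an assumption).
import math

radius_miles = 30
coverage_sq_miles = math.pi * radius_miles ** 2
print(f"{coverage_sq_miles:.0f} square miles")   # -> ~2827, i.e. about 3,000
```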

How WiMAX Works in a Network Environment

In this system, a transmitter (a WiMAX tower) is connected to the Internet Service Provider (ISP) using cables, and that tower can provide Internet to its nearby area. These towers can also provide Internet connections to 'backhaul towers' placed in a straight line to them, through a wireless connection using microwaves. So WiMAX towers provide wireless Internet connections in two ways.


Line-of-sight Transmission – Any WiMAX receiver, such as a dish antenna, situated in one straight line to the WiMAX transmitter tower (see the sketch after this list for how far such a line can reach).

Non-Line-of-sight Transmission – In this method, any WiMAX receiver close to the transmitter can access the Internet through WiMAX. Here you can use a special modem to receive the signals and distribute them to the other devices at your place through cables, or connect a computer directly to the transmitter using a small antenna and build a direct connection between them.
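For line-of-sight links, the curvature of the Earth sets the hard limit on distance, which is why tower height matters so much. A standard radio-horizon approximation is d ≈ 4.12 × (√h1 + √h2) km for antenna heights in metres; the heights below are made-up examples, not real deployments.

```python
# Radio-horizon estimate for a line-of-sight microwave link.
# d_km ~= 4.12 * (sqrt(h1_m) + sqrt(h2_m)), a standard approximation
# that already accounts for typical atmospheric refraction.
import math

def radio_horizon_km(h_tx_m: float, h_rx_m: float) -> float:
    return 4.12 * (math.sqrt(h_tx_m) + math.sqrt(h_rx_m))

# Example: a 100 m WiMAX tower and a 10 m rooftop backhaul antenna.
print(f"{radio_horizon_km(100, 10):.0f} km")   # -> ~54 km, close to the 30-mile figure
```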

WiMAX Standards

Data transfer through WiMAX technology happens according to worldwide-recognized standards under the terms of the IEEE (Institute of Electrical and Electronics Engineers). WiMAX uses the IEEE 802.16 family of standards; in other words, if there's a data & information transfer system built on the IEEE 802.16 standard, it is WiMAX technology.

Accessible Distance

From the main station, it allows sharing data and information within 30 miles (50 km) at a high speed of 70 megabits per second, using microwaves between 2 GHz and 66 GHz. This makes the technology most suitable for a Metropolitan Area Network (MAN), because of its ability to transfer data across a wide area from a single WiMAX transmitter. Bluetooth is only useful within a small range of about 100 m, which suits a Personal Area Network (PAN). The maximum distance at which Wi-Fi is accessible is limited to about 300 feet, making it more suitable for a Local Area Network (LAN). Neither Bluetooth nor Wi-Fi can cover a really wide area, and therefore WiMAX is the best option.
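Putting those ranges side by side makes the comparison concrete. The figures below are this article's rough numbers, not formal specifications:

```python
# Rough reach of each technology, using the figures quoted above.
ranges_m = {
    "Bluetooth (PAN)": 100,       # ~100 m
    "Wi-Fi (LAN)": 91,            # ~300 feet
    "WiMAX (MAN)": 48_000,        # ~30 miles
}
for tech, reach in sorted(ranges_m.items(), key=lambda kv: kv[1]):
    print(f"{tech:16s} ~{reach:>6,} m")
```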

WiMAX History

In the 1990s, scientists were working on discovering a new technology that could be used to share data within a wide area. At that time there were only two wireless broadband technologies in use: LMDS (Local Multipoint Distribution Service) & MMDS (Multichannel Multipoint Distribution Service). Among them, scientists understood that the LMDS method could be developed much further, and it is what evolved into the IEEE 802.16 standard. As soon as it was established as the IEEE 802.16 technology standard, an industry body was needed to keep the technology developing. So in 2001, the WiMAX Forum (wimaxforum.org) was founded. The name 'WiMAX' was assigned to the IEEE 802.16 standard by this forum. All the companies that produce devices and provide services connected to WiMAX technology are members of the forum, which continues as a non-profit organization and remains influential in developing technology standards today.

Father of WiMAX Technology

It was 'Arogyaswami Paulraj', an Indian-born scientist, who made the key breakthrough in developing the LMDS technology into WiMAX by introducing MIMO wireless theory. Without that theory, there was not a single possibility of developing WiMAX, and that's why Arogyaswami Paulraj is respected as the Father of WiMAX Technology.



What is this high-speed data transfer method? || TransferJet Technology

'TransferJet Technology' is a wireless way of exchanging data and information at a speed matching modern technology. The TransferJet Consortium can be mentioned as the founder of this TransferJet technology; its most recent announcement about the standard came on the 8th of June 2017.


Why TransferJet?

This technology has been named TransferJet to convey the idea that it can exchange data & information at a speed similar to that of a jet. The main specialties of this technology are indeed that speed and the way it exchanges data.

The way devices with TransferJet technology exchange data with each other is totally different from the way all other wireless devices do. But for the users, transferring data from one device to another with this technology is not a complicated activity. Data transfer begins as soon as devices with this technology come close to each other, and it gets even faster if the two devices touch. It is like a wireless charging system, where the phone battery charges when the phone touches the surface of the wireless charger. TransferJet is similar: data is transferred most quickly, at the speed of a jet, when the two devices touch each other. What a cool way of exchanging data!


How fast is it?

Through this technology, data can be transferred at a maximum speed of 10 gigabits per second (10 Gbit/s). Note that data stored in a computer is measured in megabytes (MB), while data transfer speed is measured in megabits (Mb), and one byte is eight bits; so a 1 GB file amounts to 8 gigabits on the wire and takes only about a second to share between two devices. Therefore, sharing 4K or VR videos wirelessly takes only a few seconds, and even all the data & information stored on an entire DVD can be shared in less than 5 seconds. All of this happens at these speeds when the two TransferJet devices are touching each other. If there's a gap of 3 cm between the two devices, the speed of sharing data is 7,500 Mbit/s (7.5 Gbit/s), whereas the transfer speed of a USB 3.0 port is only 5,000 Mbit/s (5 Gbit/s). So anyone can understand the speed and the benefits of this TransferJet technology.
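The byte-versus-bit arithmetic behind those claims is easy to verify. A short sketch using the speeds quoted above (the DVD size is my assumed 4.7 GB single-layer figure):

```python
# Transfer time: file sizes are quoted in bytes, link speeds in bits per second.
def transfer_seconds(size_gigabytes: float, rate_gigabits_per_s: float) -> float:
    return size_gigabytes * 8 / rate_gigabits_per_s   # 1 byte = 8 bits

print(f"1 GB file at 10 Gbit/s:  {transfer_seconds(1.0, 10):.1f} s")   # ~0.8 s
print(f"4.7 GB DVD at 10 Gbit/s: {transfer_seconds(4.7, 10):.1f} s")   # ~3.8 s, under 5 s
print(f"1 GB file over USB 3.0:  {transfer_seconds(1.0, 5):.1f} s")    # ~1.6 s
```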


Is it Safe to share data using TransferJet?

Safety is one of the main things anyone cares about when it comes to modern technology. Regarding TransferJet, there's no reason for anyone to worry much about security risks. The main reason is that you can share things through this technology only within a very short distance of about 3 cm, so no one at a longer distance can hack into the connection and steal the data being shared through it.

Also, there's an inbuilt security system which allows the user to choose the devices to pair with for sharing data and to decline all other devices. So even when there are more devices powered by TransferJet technology nearby, you can transfer data only to the previously selected devices. This ability to select and pair only with previously approved devices makes it more secure to use this technology to share data & information.


Technology used in TransferJet

TransferJet is a wireless technology used to share data & information within a very small area, so we can say it's a 'Close Proximity Wireless Transfer' technology. It is also a wideband technology, as well as an expressway for sharing data.

In this technology, radio waves are used to share data, at a center frequency of 4.48 GHz. Likewise, its very low transmission power helps it stay clear of other communication systems without causing interference.


TransferJet History

On the 21st of August in 2008, TransferJet was demonstrated as a technology that could be used practically, by Sony at its own labs in Tokyo. There, they shared photos from a camera to a huge computer screen without any wired connection, announcing that they had found a new way of sharing data between nearby devices. That's how this technology was born.



Final Thoughts...

With devices adjusting themselves automatically to share data, this technology has become popular in many countries today. Working with the IEEE's 802.15.3e standard, the TransferJet Consortium is also working hard on developing it even further. On the 8th of June 2017, it announced that it is improving this technology to make mobile communication networks more effective & efficient; that new generation is standardized as 'TransferJet X'. Likewise, it has become very useful as a technology for sharing data & information between nearby devices without any wired connection. So I hope you'll also find it useful. And don't forget to subscribe to this blog; if there's anything you want to know, or anything you know beyond what I mentioned here, simply leave a comment in the comments section below.

Learn about Processor Technology/ Microchips

All the capabilities of a computer mainly depend on a microchip called the 'Processor'. Simply, it is a polished silicon plate on which transistor circuits are arranged, yet it comes with amazing power. In the recent past, processors were not a very popular topic among people. But they soon became popular with the improvement of processors and the increasing competition among processor manufacturing companies. There are only a few processor manufacturing companies in the world; 'Intel' and 'AMD' are the most popular of them.

Although Intel stood well ahead, the situation changed a bit with the new processor line introduced by AMD. This new AMD processor was introduced in late February 2017 and was named 'Ryzen'. These Ryzen processors compete with the new processors Intel introduced on the 3rd of January 2017, named 'Kaby Lake'. The competition has grown because Ryzen and Kaby Lake processors match each other tit for tat; but when it comes to efficiency, Ryzen processors are among the best of all processors at the moment. Keep on reading; let's first run back to the early days of these processors.

Microchip History


Going back nearly 70 years from today, on the 16th of December in 1947, William Shockley, John Bardeen, and Walter Brattain invented the transistor at Bell Labs in the USA. With this new innovation, transistors replaced most parts, such as the valves or vacuum tubes, in the electronic devices of those days.

First-generation computers came with valves, which are also called vacuum tubes. From the next generation, that is, the second generation, computers came with transistor circuits. These transistors were then combined into integrated circuits for the third-generation computers; that step was taken by Robert Noyce and Jack Kilby.

Jack Kilby was from Texas Instruments in the USA, and Robert Noyce was from Fairchild Semiconductor. In 1961, both of them separately found that all the transistors, resistors, capacitors, and the wires connecting them could be built together on a single piece of semiconductor, a chip. This was the first step in the invention of processors/ microchips.

In 1968, Robert Noyce, Gordon Moore, and Andy Grove created a company, and it was named Intel. Computers were taken from the third generation to the fourth by them, with their creation of the world's first microprocessor in 1971, named the 'Intel 4004'. There were 2,300 transistors included in that microchip.



Moore’s Law of Microchips

Although the world's first microprocessor came with only 2,300 transistors, that number began to increase immediately. Gordon Moore, who observed this trend, wrote an article for 'Electronics' magazine, published in the USA on the 19th of April 1965. There he described that the number of transistors that can be included in a microchip doubles roughly every two years. This observation has held true even to this day, and this saying of Moore is known as 'Moore's Law'.
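Moore's Law is simple enough to project in a few lines. The toy sketch below seeds the doubling-every-two-years rule with the Intel 4004's 2,300 transistors; it illustrates the law's compounding, and is not real industry data for each year.

```python
# Toy projection of Moore's Law: transistor count doubles every two years.
def moores_law(start_year: int, start_count: int, year: int) -> float:
    return start_count * 2 ** ((year - start_year) / 2)

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{moores_law(1971, 2300, year):,.0f} transistors")
# 1971 -> 2,300   1981 -> ~73,600   1991 -> ~2.4 million   2011 -> ~2.4 billion
```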



How Are Microchips Made?

To make microchips, sea sand is used. Silicon bars of different diameters are made by cleaning the sand and heating it at a high temperature until it is 99.9999% pure silicon. Those silicon bars are called 'ingots'. From those silicon bars, thin slices with a thickness of about 1/30'' are cut, and those slices are then polished smooth. Such a silicon plate is called a 'polished wafer'.

Then an electrically resistant silicon dioxide [SiO2] layer is applied to this plate, and then a photoresist layer is applied as well. The next step is sending ultraviolet rays onto the silicon plate; this method of making chips with ultraviolet rays is called 'deep ultraviolet optical lithography'. The light is sent through a patterned plate, called a 'mask', onto the silicon. (The single chip piece created after the whole procedure is called a 'die'.) When the ultraviolet rays hit the light-sensitive chemical [the photoresist layer] applied to the plate, those areas harden. Then a high-speed blast is sent over the wafer, and it blows out the unnecessary material while keeping the hardened chemical parts.

After the completion of these steps, a small circuit has been created on the wafer, so small that its measurements can only be given in nanometers. Finally, the wafer is covered with an aluminium layer, and the chip can then be divided into parts according to its purpose.



Transistors and Microchips

A transistor is the basic electronic switch of a chip; all microchips are made up of many of these transistors. With the method Jack Kilby and Robert Noyce found for creating a microchip, many transistors are placed in a small area of a chip to make a microchip.

When trying to place more and more transistors on a small surface, the etched paths must be tiny as well. The size of these paths on modern microchips is 14 nm; at that scale, nearly 14 million transistors can be placed on a 1 mm² surface of a processor. IBM has been producing this type of processor since 2014, and Intel's Kaby Lake and AMD's Ryzen are in the same class.



Modern Processor Technology

Processors with a special structure that can handle the instructions given by modern operating systems and programs are the trend today. They come with a standardized structure [ISA – Instruction Set Architecture], and technically they are considered processors with the x86 architecture. This architecture was also founded by Intel, but today many other companies use this technology as well. As noted at the very beginning, the Kaby Lake and Ryzen processors are the most suitable examples of this.



Final Thoughts...

Single-core processors have developed over time, and now there are dual-core, quad-core, hexa-core, octa-core, and even deca-core processors. With their development, the speed, efficiency & effectiveness have increased, and so have their prices. Computers that come with such developed processors are far better than the others; almost all the quality and the demand of a computer is based on this small microchip. So having a computer with a good processor is better than having a computer with better equipment, such as an expensive keyboard, mouse, or screen, but a poor-quality processor. I hope you found at least a single fact here that helps improve your knowledge. Subscribe to this blog and stay in touch to be the first to read the next post. And don't forget to leave a comment below about what you think of this post; if there's anything you know beyond the facts described above, be kind enough to share that knowledge with all the other readers by leaving a comment in the comments section below.


What is Unix OS?

Almost half a century has passed since the Unix operating system was first created; its 49th anniversary is to be celebrated in August this year (2018). Although it is nearly 50 years old, we cannot call it a useless operating system: everyone who uses at least a smartphone, or who connects to the Internet and does their work there, executes at least a small piece of code descended from the Unix OS. Every OS in the modern trend shows the influence of Unix at some stage of its evolution.

Example :- The Windows operating system runs on the stepping-stone of the communication stack designed with the Unix operating system, which is used to temporarily store data and information during the communication process.

Example :- Most parts of Apple's OS X have been designed using code from the Unix OS.

Example :- The birthplace of the Linux operating system is the Unix OS.

Example :- Most of the web servers on the Internet, which store and manage data accurately, are also built on the Unix OS.



Birth of the 'UNIX' Operating System

In the mid-1960s, the Massachusetts Institute of Technology (MIT), General Electric (GE), and Bell Labs jointly commenced a huge mission to invent a new operating system for the 'mainframe computers' used in that decade. The mission was named 'Multics', which stands for 'Multiplexed Information and Computing Service'. MULTICS was a large, ambitious, and innovative operating system that was intended to support hundreds of simultaneous users. It was well structured, with many new features that had not existed before. Most importantly, the security of the computer was considered carefully in that plan.

The invention of the Multics operating system began with the idea of making it a commercial product. Although it was sold to several customers, it was unable to succeed as a commercial product. Therefore, Bell Labs' work on Multics was completely halted, and the resources used to build the operating system were released. It brought an intermission to all the researchers who had worked on the Multics operating system.

There was a member of the Bell Labs research group who took part in the invention of the Multics OS. His name was 'Ken Thompson', and he was a computer game lover. During the intermission period, he did something useful by making a game named 'Space Travel'. But on the mainframe computers its execution was so slow that Ken Thompson set out to invent a new operating system on which his game could run properly. So he wrote the code for his new operating system over nearly one month, finishing in August 1969. The code Ken Thompson wrote became the main code base of the Unix OS.

There was another member of the Multics project who stood side by side with Ken Thompson in the process of creating his new OS: 'Dennis Ritchie', who was engaged in improving software for the PDP-7 computers of DEC (Digital Equipment Corporation). He helped Ken Thompson adapt the operating system he had invented to the PDP-7 computers. All this happened to make the game Ken Thompson created work properly on those computers. 'Brian Kernighan' was also from the Multics team; he supported the idea of developing the OS Ken Thompson invented for the DEC PDP-7 computers, and it was he who gave the new operating system the name 'UNIX'.

In 1971, AT&T Bell Labs funded further improvements to the Unix operating system, and the name 'Unix' was officially accepted that year. Also in this year, it gained a text-processing feature and the ability to be used by many people at the same time.

By 1973, it had been rewritten in the 'C language' to make it easy to add new features to the operating system. By this time it had become famous among universities and technical organizations. Bell Labs sold a booklet of the complete source code along with the OS to those who bought it, and earned a fair profit.

In 1975, a new version of the Unix OS was released as 'Unix Version 6', and updated versions kept following. In 1980, Microsoft built a Unix-based operating system for 16-bit microcomputers, named 'Xenix'. Windows was invented under the influence of that new operating system.

Evolution of ‘UNIX’

By modifying the source code, the major code base of the Unix operating system, new versions of the operating system were invented, mostly after the year 1982. All the software needed to work with a computer connects to the computer's hardware through the core of that source code, which is known as the Unix kernel.

In 1982, the University of California found a new OS named 'BSD Unix' (the Berkeley Software Distribution of Unix). Bill Joy was a member of the team that invented the BSD OS. In 1982 he built up an organization of his own, 'Sun Microsystems', inventing the Unix-based 'SunOS' operating system.

Thereafter, many new modified versions of the Unix OS appeared, and the 'NeXT' OS, which later came to Apple, was one of the most special versions among them. That NeXT OS was the base of the Mac OS X constructed for Apple computers. Some parts of the source code written by Dennis Ritchie are in the kernel of the Mac OS even today.

Although the source code of the Unix kernel has been widely shared and studied, there are intellectual property owners for the Unix OS. Therefore no one is allowed to modify it and release new versions of the Unix OS unless they get permission from the party who owns the rights. Today, this is handled by 'The Open Group' organization (opengroup.org).

Unix VS. Open Source

Richard Stallman, who was a programmer at MIT (the Massachusetts Institute of Technology), came up with the idea of finding a way that allows anyone to download the source code of a Unix-like OS for free and modify it on their own, rather than keeping it as something to be bought with cash. So he began to build that kind of an OS, named 'GNU' (short for 'GNU's Not Unix'). GNU is also an OS like Unix: all the essential parts of an OS, such as the kernel, driver programs, user interface, and system utilities, were planned for GNU as well. With the head start of writing the code, in 1983, Richard Stallman gave birth to 'GNU.ORG'.

In 1985, he began another organization, called 'FSF.ORG' (the Free Software Foundation). He explained that free and open-source software means computer programs which can be used, copied, examined, modified, and improved by anyone, with no restrictions or regulations regarding intellectual property ownership. A statement containing this definition was offered in 1985, identified as the 'GNU Manifesto'. The kernel of the GNU OS was named 'Hurd', but this community-supported Hurd kernel was still unfinished in 1991.

Unix VS. Linux

In 1987, Andrew Tanenbaum, a computer scientist at the Vrije Universiteit in Amsterdam, created an operating system named 'Minix', based on the Unix OS. It was intended to clear up his students' doubts about how an operating system works. He also wrote a textbook about this Minix OS and shipped a floppy disk containing the kernel of the operating system with that book. It spread through the network called 'Usenet' and became popular within a very short period of time. Within the first 3 months of the birth of the Minix operating system, about 40,000 people had begun discussions about this new OS and ways of improving its file system.

Linus Torvalds, a student studying computer science at the University of Helsinki in Finland, was also one of those engaged in improving the file system of the Minix OS. Not only that, he used this Minix OS for his own work, mastering the knowledge of how to modify it for different purposes. With this influence and knowledge, on the 5th of October 1991, he published the fresh kernel of a new operating system to the world. It was named 'Linux', which we still use at present, and the base of this Linux OS is also the Unix OS, as it was designed using the Unix file-system concepts.

Linus Torvalds shared the kernel of his Linux OS and gathered a huge crowd to improve it. Linus was already engaged with the community of open-source software, and he worked on combining the Linux kernel with the GNU OS. As a result of partnering with Richard Stallman, the kernel of the Linux OS was taken as the kernel of the GNU OS, and the GNU/Linux operating system was born. The Linux operating system most people talk about today is this GNU/Linux system.


With this head start, the process of producing operating systems using the GNU/Linux kernel began. Likewise, in 1993, 'Debian', the world's first complete GNU/Linux-based operating system, was designed. Thereafter, many such operating systems were designed, and they are called 'GNU/Linux distributions'. 'Ubuntu' and 'Android' are at the very top of the list of GNU/Linux distributions, which is filled with many modified versions such as 'Chromium OS', 'Red Hat' & 'SUSE'. All of them were invented under the influence of the Unix OS.



Finally…

From the birth of the Unix OS in August 1969, it has run a long path of success with many achievements. Code from the Unix kernel is used in almost every computer in use nowadays. Ken Thompson, the father of the Unix OS, improved it with the help of Dennis Ritchie, and Brian Kernighan gave the name 'Unix' to the OS Ken Thompson and Dennis Ritchie designed together. It was completely rewritten in the 'C language' by 1973. In 1975, a new version of the Unix OS was released as 'Unix Version 6'. 'Xenix' was produced by Microsoft for its 16-bit microcomputers in 1980, and Windows was designed under the influence of that Xenix OS. The 'NeXT' OS that came to Apple was also designed based on the Unix OS. Richard Stallman created the GNU OS and is the founder of GNU.org and FSF.org. In 1991, the Linux OS, designed based on the Unix OS, was launched. The first OS of the GNU/Linux distributions was produced in 1993 and named 'Debian'. All those GNU/Linux distributions are based on the Unix OS. Likewise, the Unix OS has come a long way, for nearly half a century, with great pride.

In 1983, Ken Thompson and Dennis Ritchie were awarded the 'Turing Award', regarded as the Nobel Prize of computer science, for designing the Unix operating system. The 'National Medal of Technology', which is the highest award offered to an American technologist, was also won by these two for the year 1998; it was presented by the American President, Bill Clinton, on the 27th of April in 1999. This is what we call the great pride of the Unix OS.



Learn about 3D technology easily

Nowadays, the trending way of delivering movies to the audience is 3D technology. Actually, what is 3D technology? '3D' is the abbreviation of 'three-dimensional': an image that delivers the perception of depth to your brain, as well as height and breadth, handing over the special kind of experience also exploited in 'Virtual Reality'. Everything around us in the real world, which we see with our eyes, is in 3D. But in traditional movie technology, we are only able to experience 2D on the screen: we cannot get an idea of the depth of the objects there. With the improvement of technology, technicians have now found a way of delivering movies in which the depth of the objects on the screen can be perceived, and that technology is called '3D technology'. That's what is actually meant by three-dimensional technology. Now let's move on to see how this 3D technology works and the concept, or the theory, behind it. So keep on reading.


3D Concept

The concept behind delivering 3D images on a 2D platform using 2D graphics is not as complex as you may think. So let's get a brief idea about that concept.

In the natural world, we think that we see 3D images, but actually each of our eyes sees a 2D image: one with a height and a breadth but no depth. Because our eyes are placed in two different positions, they see any of these 2D images from two different angles. The data of the 2D images collected by the eyes go to our brain separately, as data from the left and the right eye. After processing these data, our brain gives us a sensation of depth built from the two 2D images captured by the eyes. Likewise, we see images with depth in the real world: an illusion of depth perception is created in our brain from the 2D images captured by our eyes. That's how we see 3D in the real world.

That's also the theory behind the 3D technology trending today. A 3D movie is shot using two cameras in two perspectives. Those cameras are carefully mounted at a distance similar to the spacing of our eyes on our face, so the scene can be shot from two different angles, just as our eyes would truly see it. There are well-developed cameras to capture such scenes, and they can be placed in exactly the right positions using special software and other programs developed especially for 3D shoots. With the help of post-editing, a proper 3D movie can be produced. That is the basic concept behind the 3D technology mainly used in 3D movie making.
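The geometry that lets two slightly offset views encode depth can be written as a single formula: for two parallel cameras a baseline B apart, with focal length f measured in pixels, a point whose position shifts by d pixels between the two views lies at depth Z = f·B/d. A small sketch with made-up numbers:

```python
# Depth from stereo disparity: Z = f * B / d for parallel cameras.
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

# Eye-like baseline of 6.5 cm and an assumed 1000 px focal length:
for d in (65, 13, 6.5):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(1000, 0.065, d):.1f} m")
# Larger disparity means a nearer object, which is exactly what the brain exploits.
```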

3D Spectacles

Either in a cinema or in front of a TV screen, we need to wear eye-cheating spectacles, called 3D spectacles. There are two types of 3D spectacles.

Active 3D Spectacles      - use electronic devices
Passive 3D Spectacles     - do not use electronic devices

Passive 3D Spectacles

Anaglyph technology is used in the 3D spectacles with red and blue glasses that are used to watch 3D movies. These 3D movies are delivered to the screen using two projectors, so our eyes catch the scene from two different perspectives: one projector presents the scene from the perspective of our left eye, and the other presents it the way our right eye would see it. When the rays of both projectors strike the screen, we see a red-colored shadow on one side and a blue-colored shadow on the other side of the original object. When we look at that through 'Anaglyph 3D Glasses', our eyes capture those two shadows separately, thanks to the two colored glasses of the spectacles. That is the same as when we look at any object in the real world, where each eye captures its own view separately. So the signals, or the data, that go to our brain are the same kind it gets when we look at an object in the real world. The brain then combines those data and creates an image with a convincing depth; therefore we can see 3D graphics on the screen.
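For this red-blue variant, compositing the two views into a single anaglyph frame amounts to channel mixing: the left view supplies the red channel and the right view the remaining channels. A minimal NumPy sketch; the random arrays merely stand in for real left-eye and right-eye photographs:

```python
# Red-cyan anaglyph compositing: left view -> red, right view -> green + blue.
import numpy as np

def make_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """left, right: HxWx3 RGB arrays of the same scene from two viewpoints."""
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]      # red channel from the left-eye view
    out[..., 1:] = right[..., 1:]   # green + blue from the right-eye view
    return out

# Usage with two random stand-in "views":
left = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
right = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
print(make_anaglyph(left, right).shape)   # (480, 640, 3)
```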

Yes, I know what you are thinking now. You may be thinking of the spectacles with identical-looking gray glasses you get at 3D cinemas, rather than spectacles with two differently colored glasses. Let's make it clear. There is also a way of delivering 3D movies by projecting the two views onto the screen as light with different polarizations. To watch this type of movie in 3D, you must use polarized 3D glasses. The glasses used in these 3D spectacles have different physical features: each lens is polarized to pass only the light of one projected view. Both of these 3D glasses, 'Anaglyph 3D Glasses' and 'Polarized 3D Glasses', are of the passive type.


Active 3D Spectacles

There are two types of active 3D spectacles, categorized according to the 3D techniques used to produce the glasses.

Liquid Crystal Shutter Glasses
Display Glasses

In modern 3D televisions and projectors, the liquid crystal shutter technology is used.

3D movies of this type are delivered in a special way: the frames meant for the left eye and the right eye are shown alternately. When the frame for the left eye is being displayed, the right eye's glass goes dark (switches off), and when the frame for the right eye is being displayed, the left eye's glass goes dark, driven by the electronic circuit in the glasses.
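The alternation is easy to model. Below is a toy sketch of frame-sequential synchronization at 120 Hz, i.e. 60 frames per eye per second; the rate and names are illustrative, not from any vendor specification.

```python
# Toy model of frame-sequential shutter sync: the display alternates
# left/right frames while the glasses blank the opposite eye.
FRAME_RATE_HZ = 120                      # 60 frames per eye per second

def shutter_state(frame_index: int) -> tuple[str, str]:
    """Return (frame shown, lens darkened) for a given frame index."""
    if frame_index % 2 == 0:
        return "left frame shown", "right lens dark"
    return "right frame shown", "left lens dark"

for i in range(4):
    t_ms = 1000 * i / FRAME_RATE_HZ
    print(f"t={t_ms:5.2f} ms: {shutter_state(i)}")
```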


3D Without Glasses

Now we know how we experience 3D using 3D glasses. There is also a way of experiencing 3D without using any glasses, called 'autostereoscopic' technology. Here, too, the images strike the screen in two perspectives, but there is a special parallax barrier on the screen that acts as a filter. That barrier filters the light coming toward our eyes and sends the light of each view directly to the left or the right eye, according to the scene. So here also the brain gets the image separately from the two eyes and builds a 3D image in your mind. That's how you can experience 3D without any glasses or other devices.

3D Technology vs Health

There have been many cases where viewers complained that they felt nauseated or experienced health issues after watching 3D movies. Therefore, to find out how 3D movies affect the human body and health, the Reuters news service carried out a kind of research with the relevant scientists. Here is a comment on 3D technology made by a professor at Northwestern University in America.

Professor Michael Rosenberg – ''While the brain is trying to understand this artificial trick, your eyes get tired and unbalanced, resulting in a headache.''

Also, Consumer Reports once highlighted a comment regarding 3D technology as follows.

''About 15% of those who watch 3D movies get a headache and tired eyes by the time the show is over.''

Professor Deborah Friedman, a professor of ophthalmology & neurology from the University of Rochester in America, once told Reuters that by watching 3D movies, our brain gets tired in an unnatural way. She also said that although 3D technology copies the real-world experience of capturing objects in 3D, it is not 100% the same as reality. Therefore the brain receives brand-new images which it has never experienced; trying to analyze them wastes its energy, and this can cause errors later on.

In this world, nearly everyone can watch 2D movies (leaving aside people who cannot see), but not everyone can watch 3D movies. Only about 30% of any population can watch 3D movies comfortably; the others have at least one impairment of the eyes which doesn't let them enjoy the 3D experience.

Future of 3D Technology

3D technology is constantly being developed. When it first came into fashion, the main complaint concerned the color of the movies, with audiences asking for progress beyond the black-and-white 3D movies that were the only ones available at the time. By now, it has settled into colorful 3D movies, and the quality of the movies has also been developed to where it is now. Currently, they are working on improving the 3D glasses for a better experience; you will be able to watch 3D movies of better quality with improved glasses and with the help of better 3D projectors. So I hope you all found at least one useful fact about 3D technology.

