Why Technical English

Who else will discuss PageRank calculations?

April 3, 2012

Composed by Galina Vitkova

Procedure of calculations

In the field of information retrieval on the web, PageRank has emerged as the primary (and most widely discussed) hyperlink analysis algorithm. But how it works still remains a mystery to many in the SEO community.


Nevertheless, given the importance of PageRank, it is worth trying to examine how it is calculated. The study is meaningful even though Google keeps the real algorithm of PageRank calculations secret.

In any case, PageRank calculations are performed in compliance with The PageRank Algorithm.

Let us consider an example consisting of 4 pages: Page A, Page B, Page C and Page D (or simply A, B, C and D, with their PageRanks denoted the same way). The pages link to each other as shown in the following picture. In the beginning the PageRanks of the pages are unknown, so we simply assign "1" to each page.

 Linking of four pages

It means that the first calculation begins with PageRanks as follows:

  A = 1    B = 1    C = 1    D = 1

According to the rules about passing rank, which follow from the formula mentioned above, each page passes a part of its PageRank to other pages. First we apply the damping factor d, which ensures that a page cannot pass its entire PageRank to another page. Then the remaining value is divided by the number of links going out of the page. Finally, the rank arriving at each page is summed up and added to it. In the first table below you see the PageRank values passed from one page to another:

A (2 links) = 1 * 0.85 / 2 = 0.425; passes 0.425 to B and 0.425 to C
B (1 link)  = 1 * 0.85 = 0.85; passes 0.85 to C
C (1 link)  = 1 * 0.85 = 0.85; passes 0.85 to A
D (1 link)  = 1 * 0.85 = 0.85; passes 0.85 to C

The resulting PageRanks are shown in the table below:

A = 1 + 0.85 = 1.85
B = 1 + 0.425 = 1.425
C = 1 + 0.425+0.85+0.85 = 3.125
D = 1

So, the next run of calculations begins with:

A = 1.85    B = 1.425    C = 3.125    D = 1

After performing the same operations, we arrive at the following result:

A = 4.50625     B = 2.21125    C = 5.97375    D = 1

In practice the same operations must be repeated 50 to 100 times to guarantee sufficient accuracy of the iterations.
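The pass-and-sum procedure above can be sketched in a few lines of Python. This is only an illustration of the accumulation rule used in the four-page example; the link structure is taken from the picture:

```python
# One pass of the rank-passing scheme from the four-page example.
d = 0.85                                   # damping factor
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
pr = {page: 1.0 for page in links}         # start with PageRank 1 everywhere

passed = {page: 0.0 for page in links}
for page, outgoing in links.items():
    share = pr[page] * d / len(outgoing)   # damp, then split among out-links
    for target in outgoing:
        passed[target] += share

pr = {page: pr[page] + passed[page] for page in pr}
# pr now holds A = 1.85, B = 1.425, C = 3.125, D = 1 (up to float rounding)
```

Running the loop again reproduces the next iteration; after a few dozen repetitions the values stop changing noticeably.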

It is worth noticing here that in the first run of the calculations Page C increases the PageRank of Page A. In the next run Page C itself gets an increase in PageRank that is proportional to the new, improved PageRank of Page A. In other words, Page C gets a proportion of its PageRank back. This is PageRank feedback, an essential part of the way PageRank works.

Links to and from your site

PageRank is the hardest factor to manipulate when optimising your pages. It is difficult to achieve and even more difficult to keep up with.


When trying to optimise your PageRank the following factors should be taken into consideration:

  • The choice of links you want pointing to your site;
  • The selection of sites you want to link out to from your site;
  • Maximising PageRank feedback by changing the internal structure and linkage of your pages.

When looking for links to your site, from a purely PageRank point of view the pages with the highest Toolbar PageRank seem to be the best solution. Nonetheless, this is not entirely true.

As more and more people try to get links only from high-PageRank sites, doing so becomes less and less profitable. Thus sites that need to improve their PageRank should be more receptive and exchange links with sites that have similar interests. Moreover, the number of links on the page linking to you alters the amount of feedback, and so on.

Therefore, maybe the best solution is getting links from sites that seem appropriate and have good quality, regardless of their current PageRank. The quality sites will either help your PageRank now, or will do so in the future.

To choose the best strategy for links out from your site, the general rule is: keep PageRank within your own site. Controlling feedback through the internal pages of your site is much easier than controlling it with links to external pages. In practice this means making links out from a page on your site that has a low PageRank itself and also contains many internal links. Then, when linking out, choose external sites that do not point back to your page with a significant number of links. This gives a better increase in PageRank, in particular due to the power of feedback.

Directing some of your links back into your own site system rather than letting rank flow out to external links improves the PageRank of your pages. That is why larger sites generally have a better PageRank than smaller ones.


Dear friend of technical English,  
Do you want to improve your professional English?
Do you want at the same time to gain comprehensive information about the Internet and Web?

Subscribe to "Why Technical English" by clicking SIGN ME UP at the top of the sidebar.

 

 


Biofuels Today

March 16, 2012

In today's world biofuels steadily attract public attention. Continuing the topic discussed in Biofuels Reduce Emissions (part 1), Biofuels Reduce Emissions (part 2) and B i o f u e l s – do they interest you?, we present a further technical text on the same theme. The author of the following post, Is bioethanol an economic fuel?, Ing. Jiří Souček, CSc., who participated in biofuel research in the Czech Republic, responds to the situation with bioethanol in Ukraine, briefly described in the text immediately below his post.

Is bioethanol an economic fuel?

By Jiří Souček

Bioethanol is definitely an economic fuel in countries where it is produced from sugarcane at a price of about 4 CZK/L. In the USA bioethanol is mainly made from grain and maize, and its production is supported by the state. In the Czech Republic there are three large factories producing bioethanol. Under Czech legislation bioethanol is used as an additive to petrol in amounts of up to 4.2 %.




Production and usage of biofuels (bioethanol, biodiesel, etc.) is appropriate:

  1. in countries with agrarian overproduction;
  2. in countries where the usage of biofuels is compulsory or is subsidised, e.g. through reduced or zero VAT.
The application of biofuels is motivated:

  1. By the effort to reduce greenhouse gas emissions;
  2. By farmland utilisation and increased employment in agriculture (development of the countryside);
  3. By the intention to reduce all components of exhaust emissions, including particulates and carcinogenic substances;
  4. By the endeavour to diminish dependence on imports of fossil fuels (petroleum, natural gas).

 

Technical problems of bioethanol application as a motor fuel, examined in the mentioned Ukrainian article, have largely been solved, as such fuels have been widely used in EU countries, the USA, Brazil, etc. for about 20 years.
In my opinion biofuels are just a transitional stage in the development of alternative motor propellants; the future will belong to electric motors and to biomass as a raw material in the chemical and other branches of industry.
By my calculations, the costs of biodiesel production are 1.4 to 1.8 times higher than those of motor diesel. Under the present price relations biodiesel will become competitive in the Czech Republic if the production price of diesel rises above 22 CZK/L (0.9 EUR/L), i.e. a retail price of about 43 CZK/L (1.7 EUR/L). That corresponds to a petroleum price of about 150 USD/mil. L.

A brief outline of bioethanol perspectives  in Ukraine

Drawn up by Galina Vítková using Биоэтанол. Гладко было на бумаге, да забыли про овраги by Andrey Stadnik, BFM Group Ukraine


At present biofuels, primarily bioethanol, are widely discussed in Ukraine. The public as well as state bodies demonstrate their interest in supporting bioethanol production in spite of the obstacles arising. The Ukrainian Ministry of Economic Development and Trade is preparing a State programme for stimulating the production and application of alternative fuels. Since January 2012 a range of laws on the same topic has been under development. Everything is done on the assumption that bioethanol producers and users should have some advantages, as they do in the USA, Brazil and EU countries.
The Ukrainian biofuel market is in its early stages. Ethyl alcohol, or ethanol, is produced in small amounts by just two factories. Since the addition of ethyl alcohol to petrol makes up to 10 %, this composite fuel carries the same VAT as ordinary petrol.
There are also technical obstacles to the massive usage of biofuels, the most important of which are these:

  1. The increased electric conductivity of petrol with bioethanol, which causes greater corrosion of the petrol tank, exhaust manifold, seals and other car components.
  2. The far higher evaporation temperature of bioethanol, which leads to trouble starting and running a motor in cold weather.
  3. Most seriously, the increased hygroscopicity of petrol with bioethanol, which causes great difficulties in storing and transporting the mixed fuel.
From the economic viewpoint bioethanol production is characterised as follows:

  1. Building a factory with a capacity of less than 60 kilotons (75 mil. L) is economically unprofitable.
  2. Bioethanol production consumes a great amount of electricity.
  3. Serious problems also arise with the sale of the side products of bioethanol manufacture, such as Dried Distillers Grains with Solubles (DDGS), carbon dioxide, etc.
  4. Another great issue is the storage of raw materials. Bioethanol in Ukraine is produced from maize and grain, and the best solution is to buy them in the necessary amounts right after the harvest. Doing so requires building large storage capacities.


Establishing a vertically integrated holding, which would include all production stages from growing the plants up to the sale, could be the best solution to these problems. At a rough estimate, the total expenses of erecting such a holding may amount to a billion EUR.

In the author's opinion such projects cannot be realised in Ukraine at present.


PS: The whole text of the article Биоэтанол. Гладко было на бумаге, да забыли про овраги is available at http://www.bfm-ua.com.

What about you? What is your own opinion on bioethanol?

Write a comment, preferably in English, but you may write it in Czech, too.


NOTE

  • Kč  =  Czech crown (CZK)
  • DPH  =  VAT (value-added tax)
  • ČR  =  the Czech Republic

 


One way to understand PageRank

February 15, 2012
Dear friend of Technical English,
In the following text I try to explain how I understand the topic. I have drawn up this post after studying different sources.
The topic of the post is important for every blogger who wants a quality blog with quality content that attracts search engines and visitors. At the same time, it is a great opportunity for writing a lively technical text for studying Tech English online. So study the topic, study Tech English and write comments, which is the best way of practising the language.
Find the necessary terminology in the Internet English Vocabulary.
Galina Vitkova

 

PageRank

PageRank is a link analysis algorithm used by the Google Internet search engine. The algorithm assigns a numerical weighting to each element of a set of hyperlinked documents on the World Wide Web, with the purpose of "measuring" its relative importance within the set. According to Google's theory, if Page A links to Page B, then Page A is saying that Page B is an important page. If a page has more important links to it, then its links to other pages also become more important.

Principles of PageRank

History

PageRank was developed at Stanford University by Larry Page (hence the name PageRank) and Sergey Brin as part of a research project about a new kind of search engine. "PageRank" is now a trademark of Google. The PageRank process has been patented and assigned to Stanford University, not to Google. Google has exclusive licence rights to the patent from the university. The university received 1.8 million shares of Google in exchange for use of the patent; the shares were sold in 2005 for $336 million.
The first paper about the project, describing PageRank and the initial prototype of the Google search engine, was published in 1998; shortly afterwards, Page and Brin founded the company Google Inc. Even though PageRank is now only one of about 200 factors that determine the ranking of Google search results, it continues to provide the basis for all of Google's web search tools.
Since 1996 a small search engine called "RankDex", designed by Robin Li, had already been exploring a similar strategy for site scoring and page ranking. The technology was patented by 1999 and was used later by Li when he founded Baidu in China.

Some basic information about PageRank

There is some basic information which needs to be known in order to understand PageRank.
First, PageRank is a number that only evaluates the voting ability of all incoming (inbound) links to a page.
Second, every unique page of a site that is indexed in Google has its own PageRank.
Third, internal site links interact in passing PageRank to other pages of the site.
Fourth, PageRank stands on its own. It is not tied to the anchor text of links.
Fifth, there are two values of the PageRank that should be distinguished:
a. PageRank which you can get from the Internet Explorer toolbar (http://toolbar.google.com);
b. Actual or real PageRank that is used by Google for calculation of ranking web pages.
PageRank from the toolbar (sometimes called the nominal PageRank) has a value from zero to ten. It is not very accurate information about a site's pages, but it is the only thing that gives you any idea of the value. It is updated approximately once every three months, more or less, while the real PageRank is calculated permanently as the Google bots crawl the web, finding new web pages and new backlinks.
Thus, in the following text the term actual PageRank is employed to deal with the actual PageRank value stored by Google, and the term Toolbar PageRank concerns the evaluation of the value that you see on the Google Toolbar.

This is how PageRank works.

The Toolbar value is just a representation of the actual PageRank. While real PageRank is linear, Google uses a non-linear scale to display it. So on the toolbar, moving from a PageRank of 2 to a PageRank of 3 takes a smaller increase than moving from a PageRank of 3 to a PageRank of 4.
This is illustrated by a comparison table (from PageRank Explained by Chris Ridings). The actual figures are kept secret, so for demonstration purposes some guessed figures were used:

If the actual PageRank is between      The Toolbar shows
0.00000001 and 5                       1
6 and 25                               2
25 and 125                             3
126 and 625                            4
626 and 3125                           5
3126 and 15625                         6
15626 and 78125                        7
78126 and 390625                       8
390626 and 1953125                     9
1953126 and infinity                   10
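Since the guessed thresholds in the table above are powers of five, the toolbar value behaves roughly like a base-5 logarithm of the actual PageRank. A minimal sketch, assuming those guessed figures (the function name and cut-offs are illustrative, not Google's):

```python
import math

def toolbar_value(actual_pagerank, base=5):
    """Map a hypothetical actual PageRank onto the 1-10 toolbar scale,
    assuming the guessed power-of-five thresholds from the table above."""
    if actual_pagerank <= 0:
        return 0
    step = math.floor(math.log(actual_pagerank, base)) + 1
    return min(10, max(1, step))
```

For example, an actual value of 30 falls in the 25-125 band and maps to a toolbar value of 3, while 600 maps to 4.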

 

The PageRank Algorithm

Lawrence Page and Sergey Brin have published two different versions of their PageRank algorithm in different papers.

The first version (the so-called Random Surfer Model) was published in the Stanford research paper titled The Anatomy of a Large-Scale Hypertextual Web Search Engine in 1998:

PR(A) = (1-d) + d(PR(T1)/C(T1) + … + PR(Tn)/C(Tn))

Where PR(A) is the PageRank of page A.
d is a damping factor, which is set between 0 and 1; nominally it is set to 0.85.
PR(T1) is the PageRank of a site page pointing to page A.
C(T1) is the number of outgoing links on page T1.

In the second version of the algorithm, the PageRank of page A is given as:

PR(A) = (1-d) / N + d (PR(T1)/C(T1) + … + PR(Tn)/C(Tn))

Where N is the total number of all pages on the Web.

The first model is based on a very simple, intuitive concept. PageRank is put down as a model of user behaviour, where a surfer clicks on links at random. The probability that the surfer visits a page is that page's PageRank. The probability that the surfer clicks on any particular link on a page is given by the number of links on that page. The probability that at each page the surfer gets bored and jumps to another random page is the damping factor d.

The second notation considers the PageRank of a page to be the actual probability of a surfer reaching that page after clicking on many links. The PageRanks then form a probability distribution over web pages, so the sum of all pages' PageRanks is one.

As for calculating PageRank, calculations by means of the first model are easier to compute because the total number of web pages is disregarded.
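The first version of the formula can simply be iterated until the values settle. A sketch, using a small made-up four-page graph as input (the graph and iteration count are illustrative):

```python
def pagerank(links, d=0.85, iterations=100):
    """Iterate PR(A) = (1-d) + d * (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))."""
    pr = {page: 1.0 for page in links}        # initial guess: 1 for every page
    for _ in range(iterations):
        new = {}
        for page in links:
            # Sum PR(T)/C(T) over all pages T that link to this page.
            incoming = sum(pr[other] / len(links[other])
                           for other in links
                           if page in links[other])
            new[page] = (1 - d) + d * incoming
        pr = new
    return pr

# A small made-up graph: A links to B and C, B to C, C to A, D to C.
ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]})
```

Note that with this first version the ranks do not sum to one; on a graph of N pages whose links all stay inside the graph, they converge to a sum of N instead.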



 


The Semantic Web – great expectations

October 31, 2011

By Galina Vitkova

The Semantic Web is a further development of the World Wide Web, aimed at interpreting the content of web pages as machine-readable information.

In the classical Web based on HTML pages, information is comprised in text or documents which a browser reads and composes into web pages visible or audible to humans. The Semantic Web is supposed to store information as a semantic network through the use of ontologies. A semantic network is usually a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent relations among the concepts. An ontology is simply a vocabulary that describes objects and how they relate to one another. A program agent is thus able to mine facts directly from the Semantic Web and draw logical conclusions based on them. The Semantic Web functions together with the existing Web and uses the HTTP protocol and resource identifiers (URIs).

The term Semantic Web was coined by Sir Tim Berners-Lee, the inventor of the World Wide Web and director of the World Wide Web Consortium (W3C), in May 2001 in the journal Scientific American. Tim Berners-Lee considers the Semantic Web the next step in the development of the World Wide Web. W3C has adopted and promoted this concept.

Main idea

The Semantic Web is simply a hyper-structure above the existing Web. It extends the network of hyperlinked human-readable web pages by inserting machine-readable metadata about pages and how they are related to each other. It is proposed to help computers "read" and use the Web in a more sophisticated way. Metadata can allow more complex, focused Web searches with more accurate results. To paraphrase Tim Berners-Lee, the extension will let the Web, currently similar to a giant book, become a giant database. Machine processing of information in the Semantic Web is enabled by its two most important features.

  • First – the all-around application of uniform resource identifiers (URIs), which are known as addresses. Traditionally on the Internet these identifiers are used for pointing hyperlinks at an addressed object (web pages, e-mail addresses, etc.). In the Semantic Web URIs are also used for specifying resources, i.e. a URI identifies an object exactly. Moreover, in the Semantic Web not only web pages or their parts have URIs; objects of the real world may have URIs too (e.g. humans, towns, novel titles, etc.). Furthermore, abstract resource attributes (e.g. name, position, colour) have their own URIs. As URIs are globally unique, they make it possible to identify the same objects in different places on the Web. Concurrently, URIs of the HTTP protocol (i.e. addresses beginning with http://) can be used as addresses of documents that contain a machine-readable description of these objects.

  • Second – the application of semantic networks and ontologies. Present-day methods of automatically processing information on the Internet are as a rule based on frequency and lexical analysis or parsing of text designated for human perception. In the Semantic Web, instead, the RDF (Resource Description Framework) standard is applied, which uses semantic networks (i.e. graphs whose vertices and edges have URIs) for representing information. Statements coded by means of RDF can be further interpreted by ontologies created in compliance with the RDF Schema and OWL (Web Ontology Language) standards in order to draw logical conclusions. Ontologies are built using so-called description logics. Ontologies and schemata help a computer to understand human vocabulary.

 

Semantic Web Technologies

The architecture of the Semantic Web can be represented by the Semantic Web Stack, also known as the Semantic Web Cake or Semantic Web Layer Cake. The Semantic Web Stack is an illustration of a hierarchy of languages, where each layer exploits and uses the capabilities of the layers below. It shows how the technologies standardized for the Semantic Web are organized to make the Semantic Web possible. It also shows how the Semantic Web is an extension (not a replacement) of the classical hypertext Web. The illustration was created by Tim Berners-Lee. The stack is still evolving as the layers are concretized.

Semantic Web Stack

As shown in the Semantic Web Stack, the following languages and technologies are used to create the Semantic Web. The technologies from the bottom of the stack up to OWL (Web Ontology Language) are currently standardized and accepted for building Semantic Web applications. It is still not clear how the top of the stack will be implemented. All layers of the stack need to be implemented to achieve the full vision of the Semantic Web.

  • XML (eXtensible Markup Language) is a set of rules for encoding documents in machine-readable form. It is a markup language like HTML. XML complements (but does not replace) HTML by adding tags that describe data.
  • XML Schema, published as a W3C recommendation in May 2001, is one of several XML schema languages. It can be used to express a set of rules to which an XML document must conform in order to be considered 'valid'.
  • RDF (Resource Description Framework) is a family of W3C specifications originally designed as a metadata data model. It has come to be used as a general method for the conceptual description of information implemented in web resources. RDF does exactly what its name indicates: using XML tags, it provides a framework to describe resources. In RDF terms, everything in the world is a resource. The framework pairs a resource with a specific location on the Web, so the computer knows exactly what the resource is. To do this, RDF expresses information as a graph of triples written as XML tags. The triples consist of a subject, a property and an object, which are like the subject, verb and direct object of an English sentence.
  • RDFS (RDF Schema, the RDF Vocabulary Description Language) provides a basic vocabulary for RDF; it adds classes, subclasses and properties to resources, creating a basic language framework.
  • OWL (Web Ontology Language) is a family of knowledge representation languages for creating ontologies. It extends RDFS as the most complex layer: it formalizes ontologies, describes relationships between classes and uses logic to make deductions.
  • SPARQL (SPARQL Protocol and RDF Query Language) is an RDF query language which can be used to query any RDF-based data. It enables information retrieval for Semantic Web applications.
  • Microdata (HTML) is an international standard for nesting semantics within existing content on web pages. Search engines, web crawlers and browsers can extract and process Microdata from a web page to provide better search results.
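The RDF triple model described above can be illustrated with plain tuples. The URIs below are hypothetical example identifiers, and the `match` helper only mimics the spirit of a SPARQL pattern query, not its syntax:

```python
# RDF-style triples: (subject, property, object), here as plain strings.
triples = [
    ("http://example.org/page/A", "http://example.org/prop/linksTo",
     "http://example.org/page/B"),
    ("http://example.org/page/B", "http://example.org/prop/author",
     "Galina Vitkova"),
    ("http://example.org/page/A", "http://example.org/prop/author",
     "Galina Vitkova"),
]

def match(triples, s=None, p=None, o=None):
    """Return triples matching a pattern; None acts like a query variable."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which pages does A link to?"
linked = match(triples, s="http://example.org/page/A",
               p="http://example.org/prop/linksTo")
```

Real RDF stores work the same way in principle: a query fixes some positions of a triple and leaves the others as variables.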

As mentioned, the top layers contain technologies that are not yet standardized or comprise just ideas. Maybe the Cryptography and Trust layers are the most uncommon of them. Cryptography ensures and verifies the origin of web statements from a trusted source by means of a digital signature of RDF statements. Trust in derived statements means that the premises come from a trusted source and that the formal logic used in deriving new information is reliable.


Nuclear energy future after Fukushima

March 23, 2011
Composed by Galina Vitkova

What does the damage to the Fukushima plant (see picture below) forecast for Japan, and for the world? But first, let us give a general description of nuclear power stations in order to understand the problems caused by the breakdown.

 

The Fukushima 1 NPP

Image via Wikipedia

Nuclear fission. Nowadays nuclear power stations generate energy using nuclear fission (Fukushima belongs to this type of nuclear power plant). Uranium-235 atoms in the fuel rods in the reactor are split in the process of fission and cause a chain reaction in other nuclei. During this process a large amount of energy is released. The energy heats water to create steam, which rotates a turbine coupled to a generator, producing electricity.

Depending on the type of fission, presumptions for ensuring fuel supply at the existing level vary from several decades for uranium-235 to thousands of years for uranium-238. At the present rate of use, uranium-235 reserves (as of 2007) will be exhausted in about 70 years. The nuclear industry argues that the cost of fuel is a minor component of the cost of fission power. In the future, mining uranium could become more expensive and more difficult; however, an increase in the price of uranium would have little effect on the overall cost of nuclear power. For instance, a doubling in the cost of natural uranium would increase the total cost of nuclear power by about 5 percent. By contrast, a doubling of the natural gas price results in a 60 percent growth in the cost of gas-fired power.
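The cost comparison above is a simple proportion. A sketch, assuming fuel makes up roughly 5 % of the total cost of nuclear power and about 60 % of gas-fired power (shares inferred from the figures in the paragraph, not official data):

```python
def cost_increase(fuel_share, fuel_price_factor):
    """Relative increase in total cost when the fuel price is multiplied
    by fuel_price_factor and fuel makes up fuel_share of the total cost."""
    return fuel_share * (fuel_price_factor - 1)

nuclear = cost_increase(0.05, 2)   # doubling uranium: about +5 % total cost
gas = cost_increase(0.60, 2)       # doubling natural gas: about +60 % total cost
```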

The possibility of nuclear meltdowns and other reactor accidents, such as the Three Mile Island accident and the Chernobyl disaster, has caused much public concern. Nevertheless, coal and hydro power stations have both been accompanied by more deaths per unit of energy produced than nuclear power generation.

At present, nuclear energy is in decline, according to a 2007 World Nuclear Industry Status Report presented in the European Parliament. The report outlines that the share of nuclear energy in power production decreased in 21 out of 31 countries, with five fewer functioning nuclear reactors than five years ago. Currently 32 nuclear power plants are under construction or in the pipeline, 20 fewer than at the end of the 1990s.

Fusion. Fusion power could solve many of the problems of fission power. Nevertheless, despite research started in the 1950s, no commercial fusion reactor is expected before 2050; many technical problems remain unsolved. Proposed fusion reactors commonly use deuterium and lithium as fuel. Assuming that fusion energy output remains at the projected level, the known lithium reserves would last 3,000 years, and lithium from sea water 60 million years. A more complicated fusion process using only deuterium from sea water would have fuel for 150 billion years.

Thanks to a joint effort of the European Union (EU), America, China, India, Japan, Russia and South Korea, a prototype reactor is being constructed at a site in Cadarache, France. It is supposed to be put into operation by 2018.

Initial projections in 2006 put its price at €10 billion ($13 billion): €5 billion to build and another €5 billion to run and decommission the thing. Since then construction costs alone have tripled.

As the host, the EU is committed to covering 45% of these costs, with the other partners contributing about 9% each. In May 2010 the European Commission asked member states to contribute an additional €1.4 billion to carry the project through to 2013. Member states rejected the request.

Sustainability: The environmental movement emphasizes sustainability of energy use and development. “Sustainability” also refers to the ability of the environment to cope with waste products, especially air pollution.

The long-term storage of radioactive waste from nuclear power has not yet been fully solved. Several countries use underground repositories. It should be added that nuclear waste takes up little space compared to waste from the chemical industry, which remains toxic indefinitely.

Future of the nuclear industry. Let us return to how the damage to the Fukushima plant affects the future use of nuclear power in Japan – and in the world.

Share of nuclear electricity production in total domestic production

Nowadays nuclear plants provide about a third of Japan’s electricity (see chart). Fukushima is not the first plant to be paralysed by an earthquake, but it is the first to be stricken by the technology’s dependence on a supply of water for cooling.

The 40-year-old reactors in Fukushima run by the Tokyo Electric Power Company faced a disaster beyond anything their designers were required to imagine.

What of the rest of the world? Nuclear industry supporters had hopes of a nuclear renaissance as countries try to reduce carbon emissions. There is talk of a boom like that of the 1970s, when 25 or so plants started construction each year in rich countries. Public opinion will surely take a dive. At the least, it will be difficult to find the political will or the money to modernise the West’s ageing reactors, though without modernisation they will not become safer. The harrowing images from Fukushima, and the sense of lurching misfortune, will not be forgotten even if the final figures reveal little damage to health. France, which has 58 nuclear reactors, seems to see the disaster in Japan as an opportunity rather than an obstacle for its nuclear industry. On March 14th President Nicolas Sarkozy said that French-built reactors have lost international tenders because they are expensive: “but they are more expensive because they are safer.”

However, the region where nuclear power should grow fastest, and now seems deterred, is the rest of Asia. Two-thirds of the 62 plants under construction in the world are in Asia; Russia plans another ten. By far the most important rising nuclear power is China, which has 13 working reactors and 27 more on the way. China has announced a pause in nuclear commissioning, and a review. But its leaders know that they must move away from coal: the damage to health from a year of Chinese coal burning is greater than that from the nuclear industry. And if anyone can build cheap nuclear plants, it is probably the Chinese.

If the West turns its back on nuclear power while China carries on, the results could be unfortunate. Nuclear plants need trustworthy and transparent regulation.

  References

  • The risks exposed: What the damage to the Fukushima plant portends for Japan—and the world; The Economist, March 19th 2011
  • Expensive Iteration: A huge international fusion-reactor project faces funding difficulties; The Economist, July 22nd 2010  

 

 


Online game playing

October 25, 2010
3 Comments
By  P. B.

There are many servers on the Internet that offer online games. Playing is very easy, and even users with only basic knowledge of computers and the Internet can play these games. The most common way to start is to open a browser, visit the Google page, and type two words into the search box: online games. Google immediately offers many servers, e.g. www.onlinegames.net, www.freeonlinegames.com, or the Czech site www.super-games.cz. Each server offers many games of different sorts. There you may find games for boys, girls, and kids, the most played games, new games, and others. Or you can select games by subject, i.e. adventure games, sports games, war games, erotic or strategy games, etc.

Assigning a path for Leviathan

Image by Alpha Auer, aka. Elif Ayiter via Flickr

Many games have their own manual explaining how to play, so the second step is to study the manual. Depending on the subject of the game, the user must use, for example, the Right Arrow key to go forward, Left Arrow to go back, PgUp to go up, and Ctrl to shoot. It is very easy to understand how to play and to recognize the goal of the game, e.g. to score the most points, to kill everything that moves, or to finish first. These games are rather simple-minded, but some people become so addicted to them, trying to improve their best performance, that they spend hours in front of the screen every day and lose all track of time.

I have tried four different servers and about six different games. In my opinion these games are very easy and, for me, boring, but for younger users or for people who are bored right now they can be interesting. However, the most important thing (in my view) is that two of the tested servers were infected (my computer warned me that the pages were dangerous and could contain malware, spyware or viruses). Friends who have had such problems with their computers ask me to repair them – maybe that is why I don’t like playing games directly on the Internet.

Quake3 + net_server
Image by [Beta] via Flickr

 

On the other hand, I have also tried the game Quake 3 (a game demo – not through the Internet, but after installing the game on my computer), and I can affirm that it was pretty interesting.

 

Quake 3 Arena is purely a shooting game. There is no goal other than to kill all other players (though in other modes, such as Team Deathmatch or Capture the Flag, two teams fight against each other). The player can choose the difficulty level (from easy to hard) and various arenas. Quake 3 Arena is the mode where the player fights in the arena against computer-controlled bots (artificially intelligent fighters).

The fighters do battle equipped with various weapons as follows:

  • Gauntlet – a basic weapon for very close combat, usually used only when the player has no other gun;
  • Machinegun – a weak gun, again used only when a better gun is not available;
  • Shotgun – a weapon for close combat, one shot per second;
  • Grenade Launcher – shoots grenades;
  • Rocket Launcher – a very popular weapon because it is easy to use and its impact is huge; but a rocket’s flight is slow, so players get used to shooting at the wall or floor because the rocket has a large blast radius;
  • Lightning Gun – an electric gun, very effective because it can kill a rival in 2 seconds;
  • Railgun – a long-distance weapon, very accurate but with a low rate of fire;
  • Plasma Gun – shoots plasma pulses;
  • BFG10K – the most powerful weapon, but the worst balanced, and for this reason not often used by players (BFG = Bio Force Gun).

It is important for players to find and acquire armor – the maximum is 200 armor points. Armor provides protection, absorbing 2/3 of incoming damage. Similarly, players should watch their health (they start with 125 points, which counts as 100%, and can reach a maximum of 200 points).
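The armor rule is easy to express as a small calculation. This is an illustrative sketch of the 2/3-absorption rule described above, not the game's actual source code:

```python
# Sketch of the armor rule: armor absorbs 2/3 of incoming damage (up to
# its remaining points); the rest is subtracted from health. The
# starting values follow the text: health 125, armor capped at 200.

def apply_damage(health, armor, damage):
    """Return (health, armor) after one hit."""
    absorbed = min(armor, damage * 2 / 3)  # armor soaks 2/3 of the hit
    health -= damage - absorbed            # the remainder hits health
    armor -= absorbed
    return health, max(armor, 0)

# A 90-point hit with full armor: 60 points absorbed, 30 to health.
h, a = apply_damage(125, 200, 90)  # -> (95.0, 140.0)
```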

Sometimes (depending on the game) additional power-ups are involved – a Battle Suit, Haste (makes movement and shooting twice as fast for 30 seconds), Invisibility (for 30 seconds), a Medkit, a Teleporter (moves the player to a random place), Regeneration, Flight (for 60 seconds), and so on.

  

 


Kernel improvements in Windows 7

March 27, 2010
Leave a Comment
  We continue discussing features new to Windows 7. This time some kernel improvements are examined. Join us!

Galina Vitkova

The kernel is a central part of most computer operating systems. It is the component of an operating system that bridges applications and the actual data processing executed by the hardware. The kernel is intended to manage communication between the hardware and software components of a computer system. It communicates with external devices (input/output devices: a keyboard, a mouse, disk drives, printers, displays, etc.), manages internal components (such as RAM, the CPU, and the HDD) and oversees all processes. The kernel controls every process that starts and runs, and decides which process will have access to the hardware and for how long.

    

Fig. 1  (from Wikipedia)

A kernel connects the application software to the hardware of a computer

 

The kernel is a constituent of a series of abstraction layers, each relying on the functions of the layers beneath it. As a basic component of the operating system, it corresponds to the lowest level of abstraction implemented in software. The abstraction layers simplify the design of all the software and make its implementation feasible.

   

Fig. 2 (from Wikipedia)

A typical vision of a computer architecture as a series of abstraction layers: hardware, firmware, assembler, kernel, operating system and applications

Several improvements and additions have been made to Windows 7 (and Server 2008 R2) kernel components, which have increased system performance and enabled more optimal use of available hardware resources. Some of them are as follows:

  • Support for up to 256 logical processors.
  • Introduction of the concept of “timer coalescing” (joining): multiple applications or device drivers that perform actions on a regular basis can be set to act at once, instead of each action being performed on its own schedule.
  • Implementation of Device Containers: before Windows 7, every device attached to the system was treated as a single functional end-point with a set of capabilities and a “status”. This was appropriate for single-function devices (such as a keyboard or scanner), but it does not accurately represent multi-function devices such as a combined printer/fax machine/scanner, or a web-cam with a built-in microphone. In Windows 7, the drivers and status information for a multi-function device can be grouped together as a single “Device Container”, which is presented to the user in the new “Devices and Printers” Control Panel as a single unit.
  • Introduction of User-Mode Scheduling: the 64-bit versions of Windows 7 and Server 2008 R2 introduce a user-mode scheduling framework. On Microsoft Windows operating systems, scheduling of threads inside a process is handled by the kernel. This is sufficient for most applications. However, applications with large concurrent threading requirements, such as a database server, can profit from having a thread scheduler in-process, because the kernel no longer needs to be involved in context switches between threads. Thanks to this innovation, threads can be created and destroyed much more quickly when no kernel context switches are required.
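The idea behind timer coalescing can be illustrated with a small sketch (not the Windows API): each periodic timer tolerates some slack, so its due time is rounded up to a shared boundary and many timers expire in a single wake-up:

```python
# Illustrative sketch of timer coalescing: round each timer's due time
# up to the next multiple of a shared granularity, so that several
# timers fire together in one wake-up instead of waking the CPU
# separately for each one.

def coalesce(due_times_ms, granularity_ms):
    """Round each due time (ms) up to the next multiple of granularity."""
    return [((t + granularity_ms - 1) // granularity_ms) * granularity_ms
            for t in due_times_ms]

# Three timers due at 101, 105 and 118 ms all fire at 120 ms with a
# 60 ms granularity -- one wake-up instead of three.
print(coalesce([101, 105, 118], 60))  # -> [120, 120, 120]
```

Fewer wake-ups let the processor stay longer in low-power idle states, which is the point of the feature.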

For more information about kernel innovations in Windows 7 and more English practice, see Core operating system.

Reference: http://en.wikipedia.org/wiki/

 

 


Renewables are becoming more and more popular

January 23, 2010
3 Comments
Composed by Galina Vitkova

REN21 (Renewable Energy Policy Network for the 21st Century) has just issued the newest information about current renewable policy and its realisation in the form of a Renewables Interactive Map (beta version). The Map can be found on the REN21 website at http://www.ren21.net/map. The map contains a great deal of information on renewable energy, including support policies, expansion targets, current shares, installed capacity, current production, future scenarios, policy pledges, etc. By simply clicking on a country of interest on the world map, you immediately gain current information about:

  • Renewables in general:
      ♦ Policies (feed-in tariffs, investment tax credits, net metering, etc.)
      ♦ Targets (final energy, primary energy, electricity, heating/cooling, etc.)
      ♦ Scenarios (before 2020, and after 2020 up to 2050)
      ♦ Others
  • Statistics (global and for individual participating countries) on geothermal energy, wind energy, solar energy, and biofuels (mainly ethanol)
  • Information about all kinds of renewables relevant to the country (wind, solar, hydro, geothermal, and biomass energy), again for the world and for world regions
  • Technologies in use
  • And others

So the map serves as a central access point to current renewable energy information, which is very convenient. Moreover, you will find concepts unknown to you in the glossary accessible from the map.

REN21 has provided authentic information for several years, in particular through its Renewables Global Status Report. The new tool, the Renewables Interactive Map, is intended to track more closely the dynamic development of renewable energy production and markets. Furthermore, it provides disaggregated information for particular countries and technologies (see aggregated information on the topic at this blog too, About renewables position just now).

By studying renewable energy information you improve your technical English while enjoying competent technical texts. Moreover, at the same time you gain very useful and comprehensive information about things on which we all depend.

Find below aggregated statistics which denote:

  • Geothermal  energy  (cumulative installed geothermal power capacity in MW)
  • Solar  energy  (cumulative  installed  photovoltaic (PV) power in MW)
  • Wind energy (cumulative  installed capacity  of wind turbines in MW)
  • Fuel ethanol (production in thousand tonnes oil equivalent).

Study the statistics of worldwide renewables adopted from http://www.bp.com/liveassets/bp_internet/globalbp/globalbp_uk_english/reports_and_publications/statistical_energy_review_2008/STAGING/local_assets/2009_downloads/renewables_section_2009.pdf.

Notice the column „Change 08 over 07“. It demonstrates that in 2008 the installed capacity of renewables increased in comparison with 2007. For example, production of ethanol in North America increased by 42.0 % and makes up 52.2 % of world ethanol production. In Europe production increased by 50.8 %, but makes up only 3.8 % of world production of this biofuel. Statistics on the usage of solar energy in Europe are of particular interest. For example, the total increase of cumulative installed photovoltaic (PV) power comes to 69.1 %; Germany increased its installed PV power by 37.5 % (40.9 % of the world total) and Spain grew its installed PV power by 422.2 % (24.5 % of the world total).
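The share figures in the tables below can be recomputed from the raw values as a quick consistency check. A small Python sketch, using values copied from the tables:

```python
# Recompute a "Share of total" cell: share = country value / world total.
# Values are taken from the renewables tables in this post.

def share(value, total):
    """Percentage share of the world total, rounded to one decimal."""
    return round(100 * value / total, 1)

# Germany's installed PV power: 5 498.0 MW of a 13 444.9 MW world total,
# which matches the 40.9 % the table states.
print(share(5498.0, 13444.9))  # -> 40.9
# US fuel ethanol: 17 460 of 34 800 thousand tonnes -> 50.2 %.
print(share(17460, 34800))     # -> 50.2
```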

Geothermal energy (MW)

Country       | 2008     | Change 08 over 07 | Share of total
Indonesia     | 1 042.5  | 6.1 %             | 10.0 %
Italy         | 810.5    | –                 | 7.7 %
Japan         | 537.3    | –                 | 5.1 %
Mexico        | 964.5    | 0.5 %             | 9.2 %
New Zealand   | 586.6    | 24.4 %            | 5.6 %
Philippines   | 1 780.0  | 18.9 %            | –
USA           | 2 998    | 2.1 %             | 28.6 %
Total         | 10 469.0 | 4.2 %             | 100 %

  

Solar energy (MW)

Region                              | 2008     | Change 08 over 07 | Share of total
North America                       | 1 226.7  | 39.9 %            | 9.1 %
  incl. USA                         | 1 172.5  | 41.2 %            | 8.7 %
Europe (without Russian Federation) | 9 614.9  | 92.3 %            | 71.5 %
  incl. Germany                     | 5 498.0  | 37.5 %            | 40.9 %
  incl. Spain                       | 3 291.2  | 422.2 %           | 24.5 %
Others                              | 2 603.3  | 25.1 %            | 19.4 %
  incl. Japan                       | 2 148.9  | 12.0 %            | 16.0 %
Total                               | 13 444.9 | 69.1 %            | 100 %

 

Wind energy (MW)

Region           | 2008    | Change 08 over 07 | Share of total
North America    | 27 940  | 48.6 %            | 22.9 %
  incl. USA      | 25 237  | 49.5 %            | 20.7 %
Europe + Eurasia | 65 998  | 68.2 %            | 54.0 %
  incl. Germany  | 23 933  | 7.4 %             | 19.6 %
  incl. Spain    | 16 543  | 12.4 %            | 13.5 %
Asia Pacific     | 26 446  | 59.8 %            | 21.6 %
  incl. China    | 12 121  | 106.3 %           | 9.9 %
  incl. India    | 9 655   | 23.1 %            | 7.9 %
Total            | 122 158 | 29.9 %            | 100 %

Fuel ethanol (thousand tonnes)

Region          | 2008   | Change 08 over 07 | Share of total
North America   | 18 154 | 42.0 %            | 52.2 %
  incl. USA     | 17 460 | 41.3 %            | 50.2 %
South America   | 13 723 | 19.7 %            | 39.4 %
  incl. Brazil  | 13 549 | 20.0 %            | 38.2 %
Europe          | 1 337  | 50.8 %            | 3.8 %
Asia Pacific    | 1 586  | –2.4 % *          | 4.6 %
  incl. China   | 1 021  | – 2.4 %           | 2.9 %
Total           | 34 800 | 30.9 %            | 100 %

        

Note: About REN21

REN21 (Renewable Energy Policy Network for the 21st Century) is a global policy network that provides a forum for international leadership on renewable energy. Its goal is to encourage policy development and the rapid expansion of renewable energy in developing and industrialised economies.

 

 


Handwriting recognition and Windows 7

December 25, 2009
1 Comment
Compiled by Galina Vitkova

Handwriting recognition concerns the ability of a computer to receive and interpret intelligible handwritten input from paper documents, photographs, touch-screens and other devices. Two varieties of handwriting recognition are principally distinguished: off-line and on-line. In off-line recognition, the image of the written text is processed from a piece of paper by optical scanning through OCR (optical character recognition) or IWR (intelligent word recognition). In contrast, in on-line handwriting recognition a real-time digitizing tablet is used for input, for example a pen-based computer screen surface.

 Off-line recognition

Off-line handwriting recognition involves the automatic conversion of text in an image into letter codes, which are usable within computer and text-processing applications. The data obtained in this form is regarded as a static representation of handwriting.

The technology is successfully used by businesses which process lots of handwritten documents, like insurance companies. The quality of recognition can be substantially increased by structuring the document, for example, by using forms.

Off-line handwriting recognition is relatively difficult, because people have different handwriting styles. Nevertheless, limiting the range of input allows recognition to be improved. For example, ZIP code digits are generally read by computer to sort incoming mail.

In optical character recognition (OCR), typewritten or printed text (usually captured by a scanner) is mechanically or electronically converted into machine-editable text. When one scans a paper page into a computer, the process results in just an image file – a photo of the page. OCR software then converts it into a text or word-processor file.

Intelligent word recognition, or IWR, is the recognition of unconstrained handwritten words. IWR recognizes entire handwritten words or phrases instead of proceeding character by character, as OCR does. IWR technology matches handwritten or printed words to a user-defined dictionary, which significantly reduces the character errors encountered in typical character-based recognition engines. IWR also eliminates a large percentage of the manual data entry of handwritten documents that, in the past, could be read only by a human.
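The word-level matching idea can be sketched in a few lines: snap the recognizer's raw output to the closest word in a user-defined dictionary. This is an illustrative toy using string similarity, not any vendor's actual IWR engine:

```python
# Minimal sketch of dictionary matching as described above: the raw
# (possibly misrecognized) word is replaced by the most similar word
# in a user-defined dictionary, suppressing character-level errors.
import difflib

def match_word(raw, dictionary):
    """Return the dictionary word most similar to the raw output."""
    hits = difflib.get_close_matches(raw, dictionary, n=1, cutoff=0.0)
    return hits[0] if hits else raw

dictionary = ["invoice", "insurance", "claim", "policy"]
# A one-character recognition error is corrected by the dictionary.
print(match_word("insuranse", dictionary))  # -> "insurance"
```

A production engine would of course score whole-word shape features rather than plain string similarity, but the dictionary-constrained search is the same idea.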

New technology on the market utilizes IWR, OCR, and ICR (intelligent character recognition, i.e. an advanced form of optical character recognition) together. For example, most ICR software has a self-learning system referred to as a neural network, which automatically updates the recognition database for new handwriting patterns. These achievements open many possibilities for the processing of documents, either constrained (hand-printed or machine-printed) or unconstrained (freeform cursive). Moreover, a complete handwriting recognition system, as a rule, also handles formatting, performs correct segmentation into characters, and finds the most plausible words.

 On-line recognition

On-line handwriting recognition involves the automatic conversion of text as it is written on a special digitizer or a personal digital assistant (PDA), a mobile device also known as a palmtop computer. The PDA’s sensor picks up the pen-tip movements as well as pen-up/pen-down switching. The obtained signal is converted into letter codes usable within computer and text-processing applications.

The elements of an on-line handwriting recognition interface typically include:

  • A pen or stylus for the user to write with.
  • A touch sensitive surface, which may be integrated with, or adjacent to, an output display.
  • A software application which interprets the movements of the stylus across the writing surface, translating the resulting strokes into digital text.
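The front end of this pipeline can be sketched simply: the digitizer reports pen-down, pen-move and pen-up events, which are grouped into strokes before a recognizer translates them into letter codes. The event names here are illustrative, not a real device API:

```python
# Sketch of on-line input capture: group (event, x, y) tuples into
# strokes, splitting at pen-up boundaries. A recognizer would then
# classify each stroke sequence into characters.

def group_strokes(events):
    """Split (kind, x, y) tuples into strokes at pen-up boundaries."""
    strokes, current = [], []
    for kind, x, y in events:
        if kind in ("down", "move"):
            current.append((x, y))
        elif kind == "up" and current:
            strokes.append(current)
            current = []
    return strokes

events = [("down", 0, 0), ("move", 1, 1), ("up", 1, 1),
          ("down", 5, 0), ("move", 5, 2), ("up", 5, 2)]
print(len(group_strokes(events)))  # -> 2 strokes
```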

Commercial products incorporating handwriting recognition as a replacement for keyboard input were introduced in the early 1980s. Since then, advances in electronics have allowed the computing power necessary for handwriting recognition to fit into a smaller form factor than tablet computers, and handwriting recognition is often used as an input method for hand-held PDAs. Modern handwriting recognition systems are often based on a time delay neural network (TDNN) classifier, such as the one nicknamed “Inferno” built at Microsoft.

In recent years, several attempts were made to produce ink pens that include digital elements, such that a person could write on paper, and have the resulting text stored digitally. The best known of these use technology developed by Anoto (see also Discussion – The Digital Pen), which has had some success in the education market. The general success of these products is yet to be determined. Nevertheless, a number of companies develop software for digital pens based on Anoto technology.


Handwriting in Windows 7

Mountain View, CA, December 1, 2009 – PhatWare Corporation has announced the launch of the latest version of PenOffice (PenOffice 3.3), which works with Microsoft Windows 7 and Microsoft Windows Server 2008 R2. PhatWare Corporation is a leading provider of software products and professional services for mobile and desktop computers. Its new product offers customers enhanced security and innovative user-interface features. PenOffice 3.3 is advanced pen-enabled collaboration and handwriting recognition software for Microsoft Windows-based computers. It can be used with any pointing input device, such as a graphics tablet, interactive whiteboard, touch-screen monitor, Tablet PC, online digital pen, or even a standard computer mouse.

According to Stan Miasnikov, president of PhatWare Corp.: “Making application compatible with Microsoft Windows 7 and Microsoft Windows Server 2008 R2 helps us offer our customers compelling benefits, including intuitive user interfaces such as pen-based collaboration, improved security and reliability features, full support for multi-core processing, and sophisticated configuration and management features to improve mobile working.”

Although handwriting recognition is an input form that the public has become accustomed to, it has not achieved widespread use in either desktop computers or laptops. It is still generally accepted that keyboard input is both faster and more reliable. As of 2006, many PDAs offered handwriting input, sometimes even accepting natural cursive handwriting, but accuracy remained a problem, and some people still find even a simple on-screen keyboard more efficient.

 

Reference: http://en.wikipedia.org/wiki/

                                          


Windows 7

October 28, 2009
5 Comments
                                                        Composed by G. Vitkova using Wikipedia, the free encyclopedia

Windows 7 launched

Windows 7 is the latest version of Microsoft Windows, produced for use on home and business desktops, laptops, netbooks, tablet PCs and media center PCs. Windows 7 was released to manufacturing on July 22, 2009. General retail availability began on October 22, 2009, less than three years after the release of its predecessor, Windows Vista.

Unlike Windows Vista, which introduced a large number of new features, Windows 7 is intended to be a more focused, incremental upgrade to the Windows line. As a result, Windows 7 is fully compatible with applications and hardware with which Windows Vista is already compatible. Some applications that were included with prior releases of Microsoft Windows, including Windows Calendar, Windows Mail, Windows Movie Maker, and Windows Photo Gallery, are not included in Windows 7. Several of them are instead offered separately as part of the free Windows Live Essentials suite.

Goals

Early in 2007, Bill Gates, in an interview with Newsweek, suggested that this version of Windows would “be more user-centric”. Later he added that Windows 7 would also focus on performance improvements. Steven Sinofsky, the new president of the Windows division at Microsoft, responsible for Windows, Windows Live, and Internet Explorer, afterward expanded on this point in the Engineering Windows 7 blog. He explained that the company was using a variety of new tracing tools to measure the performance of many areas of the operating system. These tools help locate inefficient code paths and prevent performance regressions.

Senior Vice President Bill Veghte stated that Windows Vista users migrating to Windows 7 would not encounter the kind of device-compatibility issues they had met migrating from Windows XP. As early as October 2008, Microsoft Chief Executive Steve Ballmer confirmed compatibility between Vista and Windows 7, pointing out that Windows 7 would be a refined version of Windows Vista.

New and changed features

Windows 7 includes a number of new features, such as advances in touch and handwriting recognition, support for virtual hard disks, improved performance on multi-core processors, improved boot performance, DirectAccess, and kernel improvements. Windows 7 adds support for systems using multiple heterogeneous graphics cards from different vendors, a new version of Windows Media Center, the XML Paper Specification (XPS) Essentials Pack, Windows PowerShell, and a redesigned Calculator with multiline capabilities. Many new items have been added to the Control Panel, such as the ClearType Text Tuner, Biometric Devices, System Icons, and Display. Windows 7 also supports Mac-like RAW image viewing plus full-size viewing and slideshows in Windows Photo Viewer and Windows Media Center.

Windows 7 includes 13 additional sound schemes, titled Afternoon, Calligraphy, Characters, Cityscape, Delta, Festival, Garden, Heritage, Landscape, Quirky, Raga, Savanna, and Sonata. A new version of Windows Virtual PC Beta is available for the Windows 7 Professional, Enterprise, and Ultimate editions. It allows multiple Windows environments, including Windows XP Mode, to run on the same machine, requiring Intel Virtualization Technology for x86 (Intel VT-x) or AMD Virtualization (AMD-V). Windows XP Mode runs Windows XP in a virtual machine and redirects displayed applications running in Windows XP to the Windows 7 desktop. Furthermore, Windows 7 supports mounting a virtual hard disk (VHD) as normal data storage, and the bootloader delivered with Windows 7 can boot the Windows system from a VHD. The Remote Desktop Protocol (RDP) of Windows 7 is also enhanced to support real-time multimedia applications, including video playback and 3D games.

The taskbar has seen the biggest visual changes: the Quick Launch toolbar has been replaced with the ability to pin applications to the taskbar. Buttons for pinned applications are integrated with the task buttons. The revamped taskbar also allows the reordering of taskbar buttons. To the far right of the system clock is a small rectangular button that serves as the Show Desktop icon. This button is part of a new feature in Windows 7 called Aero Peek: hovering over it makes all visible windows transparent for a quick look at the desktop. On touch-enabled displays such as touch screens and tablet PCs, the button is slightly wider to accommodate being pressed with a finger. Clicking this button minimizes all windows, and clicking it a second time restores them. Additionally, there is a feature named Aero Snap, which automatically maximizes a window when it is dragged to the top edge of the screen, and sizes it to half the screen when it is dragged to the left or right edge. This also allows users to snap documents or files on either side of the screen to compare them.
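The edge-detection logic behind a snap feature like this can be sketched in a few lines. This is an illustrative model of the behavior (top edge maximizes, side edges snap to half the screen), not Windows code:

```python
# Illustrative sketch of edge-snap logic: classify a drag release
# position by which screen edge (if any) the cursor is within a small
# threshold of. The threshold value is an arbitrary assumption.

def snap_action(x, y, screen_width, edge=5):
    """Return the snap action for a cursor released at (x, y)."""
    if y <= edge:
        return "maximize"    # top edge: maximize the window
    if x <= edge:
        return "snap-left"   # left edge: fill the left half
    if x >= screen_width - edge:
        return "snap-right"  # right edge: fill the right half
    return "none"            # no edge: leave the window as dropped

print(snap_action(960, 0, 1920))     # top edge -> "maximize"
print(snap_action(1919, 540, 1920))  # right edge -> "snap-right"
```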

Windows 7 includes a new networking API (Application Programming Interface) for developers. It supports building Simple Object Access Protocol (SOAP)-based web services in machine code, adds new features to shorten application install time, reduces User Account Control (UAC) prompts, simplifies the development of installation packages, and improves worldwide support through a new Extended Linguistic Services API. As early as 2008, Microsoft announced that colour depths of 30-bit and 48-bit would be supported in Windows 7. The video modes supported in Windows 7 are 16-bit sRGB (standard Red Green Blue colour space), 24-bit sRGB, 30-bit sRGB, 30-bit sRGB with extended colour gamut, and 48-bit scRGB. Microsoft is also implementing better support for solid-state drives, so Windows 7 will be able to identify a solid-state drive uniquely. Microsoft also plans to support USB 3.0 in a subsequent patch, although support is not included in the initial release because of delays in the finalization of the standard.

Users will also be able to disable more Windows components than was possible in Windows Vista. New additions to this list of components include Internet Explorer, Windows Media Player, Windows Media Center, Windows Search, and the Windows Gadget Platform.

“The launch of Windows 7 has exceeded everyone’s expectations, storming ahead of Harry Potter and the Deathly Hallows as the biggest-grossing pre-order product of all time, and demand is still going strong,” said Brian McBride, managing director of Amazon UK, on October 22, 2009.

References: http://en.wikipedia.org/wiki/Windows_7


Next Page »

    September 2017
    M T W T F S S
    « Jul    
     123
    45678910
    11121314151617
    18192021222324
    252627282930  

    Blog Stats

    • 203,228 hits

    Subscribe with BlogLines

    Translatorsbase

    Dynamic blog-up

    technorati

    Join the discussion about

    Seomoz

    I <3 SEO moz