Why Technical English

Biofuels Today

March 16, 2012
Leave a Comment

In today's world biofuels steadily attract public attention. Continuing the topic discussed in Biofuels Reduce Emissions (part 1), Biofuels Reduce Emissions (part 2) and Biofuels – do they interest you?, we present a further technical text on the same theme. The author of the following post, Is bioethanol an economic fuel?, Ing. Jiří Souček, CSc., who participated in biofuel research in the Czech Republic, responds to the situation with bioethanol in Ukraine, briefly described in the text immediately below the post.

Is bioethanol an economic fuel?

By Jiří Souček

Bioethanol is definitely an economic fuel in countries where it is produced from sugarcane at a price of about 4 CZK/L. In the USA bioethanol is made mainly from grain and maize, and its production is supported by the state. In the Czech Republic there are three large factories producing bioethanol, which under Czech legislation is used as an additive to petrol in amounts of up to 4.2%.




Production and usage of biofuels (bioethanol, biodiesel, etc.) is appropriate:

  1. in countries with agricultural overproduction;
  2. in countries where the usage of biofuels is compulsory or is subsidised, e.g. through reduced or zero VAT.
The application of biofuels is motivated by:

  1. The effort to reduce greenhouse gas emissions;
  2. Farmland utilisation and higher employment in agriculture (development of the countryside);
  3. The intention to reduce all components of exhaust emissions, including particulates and carcinogenic substances;
  4. The endeavour to diminish dependence on imports of fossil fuels (petroleum, natural gas).

 

Technical problems of bioethanol application as a motor fuel, examined in the mentioned Ukrainian article, have essentially been solved, as such fuels have been widely used in EU countries, the USA, Brazil, etc. for about 20 years.
In my opinion, biofuels are just a transitional stage in the development of alternative motor fuels; the future will belong to electric motors and to biomass as a raw material in the chemical and other branches of industry.
By my calculations, the expenses of biodiesel production are 1.4 to 1.8 times higher than those of diesel fuel. Under present price relations, biodiesel will become competitive in the Czech Republic if the production price of diesel rises above 22 CZK/L (0.9 EUR/L), i.e. the retail price reaches about 43 CZK/L (1.7 EUR/L). This corresponds to a petroleum price of about 150 USD/mil. L.

A brief outline of the prospects for bioethanol in Ukraine

Drawn up by Galina Vítková using Биоэтанол. Гладко было на бумаге, да забыли про овраги by Andrey Stadnik, BFM Group Ukraine


At present biofuels, primarily bioethanol, are widely discussed in Ukraine. The public as well as state bodies demonstrate their interest in supporting bioethanol production despite the obstacles that are arising. The Ukrainian Ministry of Economic Development and Trade is preparing the State Programme for Stimulating the Production and Use of Alternative Fuels. Since January 2012 a range of laws on the same topic has been under development. Everything is done on the assumption that bioethanol producers and users should have certain advantages, as they do in the USA, Brazil and EU countries.
The Ukrainian biofuel market is in its infancy. Ethyl alcohol, or ethanol, is produced in small amounts by only two factories. Since the addition of ethyl alcohol to petrol makes up to 10%, this blended fuel carries the same VAT as ordinary petrol.
There are also technical obstacles to the mass usage of biofuels, the most important of which are the following:

  1. The increased electrical conductivity of petrol blended with bioethanol, which causes greater corrosion of the fuel tank, exhaust manifold, seals and other car components.
  2. Another technical problem is the considerably higher evaporation temperature of bioethanol, which leads to trouble with starting and running the engine in cold weather.
  3. The most serious problem, however, is the increased hygroscopicity of petrol blended with bioethanol, which causes great difficulties in storing and transporting the mixed fuel.
From the economic viewpoint, bioethanol production can be characterised as follows:

  1. Building a factory with a capacity of less than 60 kilotons (75 million litres) is economically unprofitable.
  2. Bioethanol production consumes a great amount of electricity.
  3. Serious problems also arise with the sale of by-products of bioethanol manufacture, such as Dried Distillers Grains with Solubles (DDGS), carbon dioxide, etc.
  4. Another great issue is the storage of raw materials. Bioethanol in Ukraine is produced from maize and grain. The best solution is to buy them in the necessary amounts immediately after the harvest, which requires building large storage capacities.


The establishment of a vertically integrated holding, which would include all production stages from growing the plants up to sale, could be the best solution to these problems. At a rough estimate, the total cost of building such a holding may amount to a billion EUR.

In the author's opinion, such projects cannot be realised in Ukraine at present.


PS: The full text of the article Биоэтанол. Гладко было на бумаге, да забыли про овраги is available at http://www.bfm-ua.com.

What about you? What is your own opinion on bioethanol?

Write a comment, preferably in English, but you may write it in Czech, too.


NOTE

  • Kč  =  Czech crown (CZK)
  • DPH  =  VAT (value-added tax)
  • ČR  =  the Czech Republic

 


One way to understand PageRank

February 15, 2012
5 Comments
Dear friend of Technical English,
In the following text I am trying to explain how I understand the topic. After having studied different sources I have drawn up this post.
The post topic is important for every blogger who wants to have a quality blog with quality content which attracts search engines and visitors. On the other hand, it is a great opportunity for writing a lively technical text for studying Tech English online. So, study the topic, study Tech English and write comments, which is the best way for practising the language.
Find necessary terminology in the Internet English Vocabulary.
Galina Vitkova

 

PageRank

Is a link analysis algorithm used by the Google Internet search engine. The algorithm assigns a numerical weighting to each element of hyperlinked documents on the World Wide Web with the purpose of “measuring” its relative importance within it. According to the Google theory if Page A links to Page B, then Page A is saying that Page B is an important page. If a page has more important links to it, then its links to other pages also become more important.

Principles of PageRank

History

PageRank was developed at Stanford University by Larry Page (the term PageRank is thus named after him) and Sergey Brin as part of a research project about a new kind of search engine. "PageRank" is now a trademark of Google. The PageRank process has been patented and assigned to Stanford University, not to Google. Google has exclusive license rights to this patent from the university. The university received 1.8 million shares of Google in exchange for use of the patent; the shares were sold in 2005 for $336 million.
The first paper about the project, describing PageRank and the initial prototype of the Google search engine, was published in 1998; shortly afterwards, Page and Brin founded the company Google Inc. Even though PageRank is now one of about 200 factors that determine the ranking of Google search results, it continues to provide the basis for all of Google's web search tools.
As early as 1996 a small search engine called "RankDex", designed by Robin Li, was already exploring a similar strategy for site-scoring and page ranking. Li patented the technology by 1999 and used it later when he founded Baidu in China.

Some basic information about PageRank

Some basic information is needed to understand PageRank.
First, PageRank is a number that evaluates only the voting ability of all incoming (inbound) links to a page.
Second, every unique page of a site that is indexed in Google has its own PageRank.
Third, internal site links interact in passing PageRank to other pages of the site.
Fourth, PageRank stands on its own. It is not tied to the anchor text of links.
Fifth, two values of PageRank should be distinguished:
a. the PageRank which you can get from the Internet Explorer toolbar (http://toolbar.google.com);
b. the actual or real PageRank that is used by Google for ranking web pages.
The PageRank from the toolbar (sometimes called the Nominal PageRank) has a value from zero to ten. It is not very accurate information about site pages, but it is the only thing that gives you any idea about the value. It is updated approximately once every three months, more or less, while the real PageRank is calculated permanently as the Google bots crawl the web, finding new web pages and new backlinks.
Thus, in the following text the term actual PageRank refers to the PageRank value stored by Google, and the term Toolbar PageRank refers to the value that you see on the Google Toolbar.

This is how the PageRank works.

The Toolbar value is just a representation of the actual PageRank. While real PageRank is linear, Google uses a non-linear graph to show its representation. So on the toolbar, moving from a PageRank of 2 to a PageRank of 3 takes less of an increase than moving from a PageRank of 3 to a PageRank of 4.
This is illustrated by a comparison table (from PageRank Explained by Chris Ridings). The actual figures are kept secret, so for demonstration purposes some guessed figures were used:

If the actual PageRank is between      The Toolbar shows
0.00000001 and 5                       1
6 and 25                               2
25 and 125                             3
126 and 625                            4
626 and 3125                           5
3126 and 15625                         6
15626 and 78125                        7
78126 and 390625                       8
390626 and 1953125                     9
1953126 and infinity                   10
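
The guessed figures suggest a roughly logarithmic mapping. A minimal sketch of such a mapping in Python follows; base 5 is an assumption read off the table above, not a documented Google value, so the boundaries are treated only approximately.

import math

# Sketch of the non-linear Toolbar mapping implied by the guessed figures
# in the table above; base 5 is an assumption taken from that table, not
# a documented Google value, so boundary cases are only approximate.
def toolbar_pagerank(actual_pagerank: float) -> int:
    if actual_pagerank <= 0:
        return 0
    if actual_pagerank < 1:
        return 1
    return min(10, int(math.log(actual_pagerank, 5)) + 1)

for value in (3, 20, 100, 600, 20_000, 2_000_000):
    print(value, "->", toolbar_pagerank(value))
# prints: 3 -> 1, 20 -> 2, 100 -> 3, 600 -> 4, 20000 -> 7, 2000000 -> 10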

 

The PageRank Algorithm

Lawrence Page and Sergey Brin have published two different versions of their PageRank algorithm in different papers.

The first version (the so-called Random Surfer Model) was published in the Stanford research paper titled The Anatomy of a Large-Scale Hypertextual Web Search Engine in 1998:

PR(A) = (1-d) + d(PR(T1)/C(T1) + … + PR(Tn)/C(Tn))

Where PR(A) is the PageRank of page A,
d is a damping factor, which is set between 0 and 1 (nominally it is set to 0.85),
PR(T1) is the PageRank of a page T1 pointing to page A, and
C(T1) is the number of outgoing links on page T1.

In the second version of the algorithm, the PageRank of page A is given as:

PR(A) = (1-d) / N + d (PR(T1)/C(T1) + … + PR(Tn)/C(Tn))

Where N is the total number of all pages on the Web.

The first model is based on a very simple intuitive concept. PageRank is presented as a model of user behaviour, where a surfer clicks on links at random. The probability that the surfer visits a page is the page's PageRank. The probability that the surfer clicks on a particular link on the page is given by the number of links on the page. The probability at each page that the surfer will get bored and jump to another random page is the damping factor d.

The second notation considers the PageRank of a page to be the actual probability that a surfer reaches that page after clicking on many links. The PageRanks then form a probability distribution over web pages, so the sum of the PageRanks of all pages will be one.

As for calculating PageRank, the first model is easier to compute because the total number of web pages is disregarded.
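
A minimal sketch of this iterative computation, written in Python purely for illustration, applies the first formula to a tiny hypothetical three-page web with d = 0.85, as in the text above.

# Iterative PageRank computation using the first (Random Surfer) formula:
# PR(A) = (1-d) + d * (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))
def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    pr = {page: 1.0 for page in pages}                 # initial guess
    for _ in range(iterations):
        new_pr = {}
        for page in pages:
            # sum PR(T)/C(T) over all pages T that link to `page`
            incoming = sum(pr[t] / len(links[t]) for t in pages if page in links[t])
            new_pr[page] = (1 - d) + d * incoming
        pr = new_pr
    return pr

# Hypothetical three-page web: A and B link to each other and to C; C links to A.
example = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A"]}
print(pagerank(example))

After enough iterations the values stop changing, which is the fixed point the formula describes.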


 

Dear friend of technical English,  

Do you want to improve your professional English?

Do you want at the same time to gain comprehensive information about the Internet and Web?

Subscribe to “Why Technical English” by clicking RSS – Posts

 


Education: Why blogs need SEO

January 28, 2012
2 Comments

Composed by Galina Vitkova

SEO (Search engine optimisation)

Is the process of improving the visibility of a website, a web page or a blog in search engines. SEO aims to maximise profitable traffic from search engines to websites. In general, the more frequently a site appears in the search results list, the more visitors it will receive from search engine users. Thus, technical communication among search engine users, including bloggers, will improve. Experience has shown that search engine traffic can determine a firm's success. Targeted visitors to a website or a blog may provide publicity, revenue, and exposure like no other. Investing in SEO, whether through time or finances, can have an exceptional rate of return.

What really is Search Engine Optimization?

SEO considers how search engines work, what people search for, and the actual search terms typed into search engines. Moreover, SEO considers which search engines are preferred by the targeted audience. Optimising a website for searching may involve editing its content to increase its relevance to specific keywords and to remove barriers to the indexing activities of search engines (see Search engine – essential information). Promoting a site to increase the number of backlinks, or inbound links, is another SEO task.

The success and popularity of a search engine is determined by its ability to produce the most relevant results for any given search. Otherwise, poor search results could turn users towards other search sources. Therefore search engines have evolved more complex ranking algorithms (which strongly affect SEO), taking additional factors into account.

Google and its PageRank   

The breakthrough idea behind Google was to analyse the relationships between websites and pages to determine the relevancy of those pages to specific search queries. The Google founders, Larry Page and Sergey Brin, then graduate students at Stanford University, used this principle and developed a mathematical algorithm for a search engine to rate the prominence of web pages. The number calculated by the algorithm has been named PageRank after Larry Page. PageRank estimates the likelihood that a given page will be reached by a user who randomly surfs the web and follows links from one page to another. In effect, this means that some links are stronger than others because a higher PageRank page is more likely to be reached by the random surfer.

Page and Brin founded Google in 1998, using the developed algorithm for searching. Google immediately attracted a growing number of Internet users due to its simple design. Google considered off-page factors (such as PageRank and hyperlink analysis) as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure). This enabled Google to avoid the kind of manipulation seen in search engines that considered only on-page factors for their rankings.

Against improper SEO

Since ranking tools appeared, webmasters have developed a great number of link-building instruments to influence search engine results within SEO. Many sites focused on exchanging, buying, and selling links, often on a massive scale that has little in common with the spirit of SEO.

By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation. The leading search engines, Google, Bing, and Yahoo, do not disclose their ranking algorithms either.


In 2007, Google announced a campaign against paid links that transfer PageRank. On June 15, 2009, it took special measures to mitigate the effects of PageRank sculpting. In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.

Google Instant, real-time-search, was introduced in late 2009 in an attempt to make search results more timely and relevant. Site administrators have spent months or even years optimising a website to increase search rankings. With the growth in popularity of social media sites and blogs the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.

Increasing prominence

A variety of methods can increase the prominence of a webpage within the search results. Cross-linking between pages of the same website or blog to provide more links to the most important pages may improve its visibility. Writing content that includes frequently searched keyword phrases, so as to be relevant to a wide variety of search queries, will tend to increase traffic. Updating content so as to keep search engines crawling back frequently can give additional weight to a site. Adding relevant keywords to web page metadata, including the title tag and meta description, will tend to improve the relevancy of a site's search listings, thus increasing traffic. Several other techniques can help improve a page's link popularity score. In any case, creating a useful, information-rich site, with pages that clearly and accurately describe your content, should be one of the main goals of SEO.


 

Dear friend of technical English,  

If you want to improve your professional English and at the same time gain basic comprehensive targeted information about the Internet and Web, then

subscribe to “Why Technical English”.

Find the subscription options on the right sidebar and:

  • Subscribe by Email Sign me up        OR
  • Subscribe with Bloglines              OR
  • Subscribe.ru

Subscribe and choose free e-books from http://bookgrill.com/?lb.

 

 



Search engine – essential information

December 29, 2011
12 Comments
Composed by Galina Vitkova using Wikipedia

A search engine usually refers to a system for searching for information on the Web. Other kinds of search engines are enterprise search engines, which search intranets, personal search engines, and mobile search engines. Different selection and relevance criteria may apply in different environments, or for different uses.

Diagram of the search engine concept

Web search engines operate in the following order: 1) Web crawling, 2) Indexing, 3) Searching. Search engines store information about a large number of web pages, which they retrieve from the Web itself. These pages are retrieved by a Web crawler (sometimes also known as a spider), an automated Web browser which follows every link it sees. The contents of each page are then analyzed to determine how it should be indexed. Data about web pages are stored in an index database. Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages. Other engines, such as AltaVista, store every word of every page they find. This cached page always holds the actual search text, since it is the one that was actually indexed. Search engines use regularly updated indexes to operate quickly and efficiently.

When a user makes a query, commonly by giving key words, the search engine looks up the index and provides a listing of best-matching web pages according to its criteria. Usually the listing comprises a short summary containing the document title and sometimes parts of the text. Most search engines support the use of the Boolean terms AND, OR and NOT to further specify the search query. The listing is often sorted with respect to some measure of relevance of the results. An advanced feature is proximity search, which allows users to define the distance between key words.
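
A minimal sketch of the indexing and searching steps described above, written in Python purely for illustration: a few hypothetical pages are indexed word by word, and a query is answered with the Boolean AND of its key words.

# Build an inverted index from a few hypothetical pages, then answer a
# query with the Boolean AND of its key words.
documents = {
    "page1": "search engines store information about web pages",
    "page2": "a web crawler follows every link it sees",
    "page3": "search engines use regularly updated indexes",
}

# Indexing: map each word to the set of pages that contain it
index = {}
for url, text in documents.items():
    for word in text.lower().split():
        index.setdefault(word, set()).add(url)

def search(query):
    """Return the pages containing every key word (implicit Boolean AND)."""
    results = None
    for word in query.lower().split():
        matches = index.get(word, set())
        results = matches if results is None else results & matches
    return results or set()

print(search("search engines"))   # -> {'page1', 'page3'}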

Most Web search engines are commercial ventures supported by advertising revenue. As a result, some of the engines employ the controversial practice of allowing advertisers to pay money to have their listings ranked higher in search outcomes. The vast majority of search engines run by private companies use proprietary algorithms and closed databases, though a few of them are open source.

Nowadays the most popular search engines are as follows:

Google. Around 2001, the Google search engine rose to prominence. Its success was based in part on the concept of link popularity and PageRank. Furthermore, it utilizes more than 150 criteria to determine relevancy. Google is currently the most widely used search engine.

Baidu. Due to the difference between ideographic and alphabetic writing systems, the Chinese search market did not boom until the introduction of Baidu in 2000. Since then, neither Google, Yahoo nor Microsoft has been able to reach the top as in other parts of the world. The reason may be the media control policy of the Chinese government, which requires any network media to filter possibly sensitive information out of their web pages.

Yahoo! Search. Only since 2004 has Yahoo! Search been an original web crawler-based search engine, with a reinvented crawler called Yahoo! Slurp. Its new search engine results were included in all of the Yahoo! sites that had a web search function. It also started to sell its search engine results to other companies, to show on their own web sites.

After the booming success of keyword search engines such as Google and Yahoo! Search, a new type of search engine, the meta search engine, appeared. In general, the meta search engine is not a search engine in itself; technically, it is a search engine based on other search engines. A typical meta search engine accepts user queries just as traditional search engines do. But instead of searching for key words in its own database, it sends those queries to other, non-meta search engines. Then, based on the search results returned by several non-meta search engines, it selects the best ones (according to different algorithms) and shows them back to users. Examples of such meta search engines are Dog Pile (http://www.dogpile.com/) and All in One News (http://www.allinonenews.com/).
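
A minimal sketch of the meta search idea, with two faked "engines" standing in for real ones; the reciprocal-rank merging used here is only one possible strategy, assumed for illustration, since, as noted above, different meta search engines use different algorithms.

# Send a query to several (faked) non-meta engines and merge their ranked
# result lists with a simple reciprocal-rank score.
def fake_engine_a(query):
    return ["http://site1.example", "http://site2.example", "http://site3.example"]

def fake_engine_b(query):
    return ["http://site2.example", "http://site4.example", "http://site1.example"]

def meta_search(query, engines):
    scores = {}
    for engine in engines:
        for rank, url in enumerate(engine(query), start=1):
            # a result placed higher (smaller rank) contributes a larger score
            scores[url] = scores.get(url, 0.0) + 1.0 / rank
    return sorted(scores, key=scores.get, reverse=True)

print(meta_search("technical english", [fake_engine_a, fake_engine_b]))
# -> ['http://site2.example', 'http://site1.example', ...]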


PS: The text is drawn up within an upcoming e-book titled Internet English (see Number 33 – WWW, Part 1 / August 2011 – Editorial). G. Vitkova

 

Dear visitor,  If you want to improve your professional English and at the same time gain basic comprehensive targeted information about the Internet and Web, then

subscribe to “Why Technical English”.

Find the subscription options on the right sidebar and:

  • Subscribe by Email Sign me up        OR
  • Subscribe with Bloglines                   OR
  • Subscribe.ru

 


Website – basic information

November 28, 2011
7 Comments

Website and Its Characteristics

                                                                                             Composed by Galina Vitkova using Wikipedia

A website (or web site) is a collection of web pages, typically common to a particular domain name on the Internet. A web page is a document usually written in HTML (Hyper Text Markup Language), which is almost always accessible via HTTP (Hyper-Text Transport Protocol). HTTP is a protocol that transfers information from the website server to display it in the user’s web browser. All publicly accessible web sites constitute the immense World Wide Web of information. More formally a web site might be considered a collection of pages dedicated to a similar or identical subject or purpose and hosted through a single domain.

The pages of a website are accessed from a common root URL (Uniform Resource Locator or Universal Resource Locator) called the homepage, and usually reside on the same physical server. The URLs of the pages organise them into a hierarchy, although the hyperlinks between the pages control how the reader perceives the overall structure and how the traffic flows between the different parts of the site. The first on-line website appeared in 1991 at CERN (the European Organization for Nuclear Research, situated in the suburbs of Geneva on the Franco–Swiss border) – for more information see ViCTE Newsletter Number 5 – WWW History (Part 1) / May 2009 and Number 6 – WWW History (Part 2) / June 2009.

A website may belong to an individual, a business or another organization. Any website can contain hyperlinks to any other website, so distinguishing one particular site from another may sometimes be difficult for the user.

Websites are commonly written in, or dynamically converted to, HTML and are accessed using a web browser. Websites can be accessed from a number of computer-based and Internet-enabled devices, including desktop computers, laptops, PDAs (personal digital assistants or personal data assistants) and cell phones.


A website is hosted on a computer system called a web server or an HTTP server. These terms also refer to the software that runs on the servers and that retrieves and delivers the web pages in response to users´ requests.

Static and dynamic websites are distinguished. A static website is one that has content which is not expected to change frequently and is manually maintained by a person or persons via editor software. It provides the same available standard information to all visitors for a certain period of time between updating of the site.

A dynamic website is one that has frequently changing information or interacts with the user based on various conditions (HTTP cookies or database variables, e.g. previous history, session variables, server-side variables, etc.) or on direct interaction (form elements, mouseovers, etc.). When the web server receives a request for a given page, the page is automatically retrieved from storage by the software. A site can display the current state of a dialogue between users, can monitor a changing situation, or can provide information adapted in some way to the particular user.

Static content may also be dynamically generated, either periodically or if certain conditions for regeneration occur, in order to avoid the performance loss of initiating the dynamic engine.
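
The difference between static and dynamic pages can be illustrated with a minimal sketch using Python's standard http.server module; the paths and page content below are purely hypothetical.

from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime

STATIC_PAGE = "<html><body><h1>About us</h1><p>The same text for every visitor.</p></body></html>"

class DemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/about":
            body = STATIC_PAGE                 # static: fixed content
        else:
            # dynamic: the page is generated anew for each request
            body = f"<html><body><p>Generated at {datetime.now():%H:%M:%S}</p></body></html>"
        data = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), DemoHandler).serve_forever()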


Some websites demand a subscription to access some or all of their content. Examples of subscription websites include numerous business sites, parts of news websites, academic journal websites, gaming websites, social networking sites, websites affording real-time stock market data, websites providing various services (e.g., websites offering storing and/or sharing of images, files, etc.) and many others.

To display active content or even create rich internet applications, plug-ins such as Microsoft Silverlight, Adobe Flash, Adobe Shockwave or applets are used. They provide interactivity for the user and real-time updating within web pages (i.e. pages do not have to be reloaded to effect changes), mainly applying the DOM (Document Object Model) and JavaScript.

There are many varieties of websites, each specialising in a particular type of content or use, and they may be arbitrarily classified in any number of ways. A few such classifications might include: Affiliate, Archive site, Corporate website, Commerce site, Directory site and many others (see a detailed classification in Types of websites).

In February 2009, an Internet monitoring company Netcraft, which has tracked web growth since 1995, reported that there were 106,875,138 websites in 2007 and 215,675,903 websites in 2009 with domain names and content on them, compared to just 18,000 Web sites in August 1995.

PS: Spelling – what is better, what is correct: “website” or “web site”?

The form “website” has gradually become the standard spelling. It is used, for instance, by such leading dictionaries and encyclopedias as the Canadian Oxford Dictionary, the Oxford English Dictionary and Wikipedia. Nevertheless, the form “web site” is still widely used, e.g. by Encyclopædia Britannica (including its Merriam-Webster subsidiary). Among major Internet technology companies, Microsoft uses “website” and occasionally “web site”, Apple uses “website”, and Google uses “website”, too.

PPS: You can find unknown technical terms in the Internet English Vocabulary.

Reference: Website – Wikipedia, the free encyclopedia

Have You Donated To Wikipedia Already?

Do you use Wikipedia? Do you know that Jimmy Wales, a founder of Wikipedia, decided to keep Wikipedia advertising-free and unbiased? As a result, it now has financial problems with surviving. Any donation, even a small sum, is helpful. Here is the page where you can donate.

Dear visitor,  If you want to improve your professional English and at the same time gain basic comprehensive targeted information about the Internet and Web, subscribe to “Why Technical English”.

Look at the right sidebar and subscribe as you like:

  • by Email subscription … Sign me up
  • Subscribe with Bloglines
  • Subscribe.ru

Right now, while the e-book “Internet English” is being prepared (see ViCTE Newsletter Number 33 – WWW, Part 1 / August 2011), posts on this topic are being published here. Your comments on the posts are welcome.


 


The Semantic Web – great expectations

October 31, 2011
3 Comments

By Galina Vitkova

The Semantic Web is a further development of the World Wide Web aimed at making the content of web pages interpretable as machine-readable information.

In the classical Web based on HTML pages, information is contained in text or documents which a browser reads and renders into web pages visible or audible to humans. The Semantic Web is supposed to store information as a semantic network through the use of ontologies. A semantic network is usually a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent relations among the concepts. An ontology is simply a vocabulary that describes objects and how they relate to one another. A program agent is thus able to mine facts immediately from the Semantic Web and draw logical conclusions based on them. The Semantic Web functions together with the existing Web and uses the HTTP protocol and URI resource identifiers.

The term Semantic Web was coined by Sir Tim Berners-Lee, the inventor of the World Wide Web and director of the World Wide Web Consortium (W3C), in May 2001 in the journal Scientific American. Tim Berners-Lee considers the Semantic Web the next step in the development of the World Wide Web. W3C has adopted and promoted this concept.

Main idea

The Semantic Web is simply a hyper-structure above the existing Web. It extends the network of hyperlinked human-readable web pages by inserting machine-readable metadata about pages and how they are related to each other. It is proposed to help computers “read” and use the Web in a more sophisticated way. Metadata can allow more complex, focused Web searches with more accurate results. To paraphrase Tim Berners-Lee, the extension will let the Web – currently similar to a giant book – become a giant database. Machine processing of information in the Semantic Web is enabled by its two most important features.

  • First – the all-round application of uniform resource identifiers (URIs), which are known as addresses. Traditionally on the Internet these identifiers are used for pointing hyperlinks to an addressed object (web pages, e-mail addresses, etc.). In the Semantic Web the URIs are also used for specifying resources, i.e. a URI identifies exactly one object. Moreover, in the Semantic Web not only web pages or their parts have a URI; objects of the real world may have a URI too (e.g. humans, towns, novel titles, etc.). Furthermore, abstract resource attributes (e.g. name, position, colour) have their own URIs. As URIs are globally unique, they make it possible to identify the same objects in different places on the Web. Concurrently, URIs of the HTTP protocol (i.e. addresses beginning with http://) can be used as addresses of documents that contain a machine-readable description of these objects.

  • Second – the application of semantic networks and ontologies. Present-day methods of automatically processing information on the Internet are as a rule based on frequency and lexical analysis or parsing of text designed for human perception. In the Semantic Web, the RDF (Resource Description Framework) standard is applied instead; it uses semantic networks (i.e. graphs whose vertices and edges have URIs) for representing information. Statements coded by means of RDF can then be interpreted by ontologies created in compliance with the RDF Schema and OWL (Web Ontology Language) standards in order to draw logical conclusions. Ontologies are built using so-called description logics. Ontologies and schemata help a computer to understand human vocabulary.

 

Semantic Web Technologies

The architecture of the Semantic Web can be represented by the Semantic Web Stack, also known as the Semantic Web Cake or Semantic Web Layer Cake. The Semantic Web Stack is an illustration of the hierarchy of languages, where each layer exploits and uses capabilities of the layers below. It shows how technologies that are standardized for the Semantic Web are organized to make the Semantic Web possible. It also shows how the Semantic Web is an extension (not a replacement) of the classical hypertext Web. The illustration was created by Tim Berners-Lee. The stack is still evolving as the layers are concretized.

Semantic Web Stack

As shown in the Semantic Web Stack, the following languages or technologies are used to create the Semantic Web. The technologies from the bottom of the stack up to OWL (Web Ontology Language) are currently standardized and accepted for building Semantic Web applications. It is still not clear how the top of the stack is going to be implemented. All layers of the stack need to be implemented to achieve the full vision of the Semantic Web.

  • XML (eXtensible Markup Language) is a set of rules for encoding documents in machine-readable form. It is a markup language like HTML. XML complements (but does not replace) HTML by adding tags that describe data.
  • XML Schema published as a W3C recommendation in May 2001 is one of several XML schema languages. It can be used to express a set of rules to which an XML document must conform in order to be considered ‘valid’.
  • RDF (Resource Description Framework) is a family of W3C specifications originally designed as a metadata data model. It has come to be used as a general method for conceptual description of information that is implemented in web resources. RDF does exactly what its name indicates: using XML tags, it provides a framework to describe resources. In RDF terms, everything in the world is a resource. This framework pairs the resource with a specific location in the Web, so the computer knows exactly what the resource is. To do this, RDF uses triples written as XML tags to express this information as a graph. These triples consist of a subject, property and object, which are like the subject, verb and direct object of an English sentence (see the sketch after this list).
  • RDFS (Vocabulary Description Language Schema) provides basic vocabulary for RDF, adds classes, subclasses and properties to resources, creating a basic language framework.
  • OWL (Web Ontology Language) is a family of knowledge representation languages for creating ontologies. It extends RDFS and, being the most complex layer, formalizes ontologies, describes relationships between classes and uses logic to make deductions.
  • SPARQL (Simple Protocol and RDF Query Language) is an RDF query language which can be used to query any RDF-based data. It enables information to be retrieved for semantic web applications.
  • Microdata (HTML) is an international standard that is applied to nest semantics within existing content on web pages. Search engines, web crawlers, and browsers can extract and process Microdata from a web page, providing better search results.
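
To make the triple idea more concrete, here is a minimal sketch in Python: statements are stored as (subject, property, object) triples and queried by pattern matching. The URIs are hypothetical examples, not real vocabulary terms, and no RDF library is used.

# A tiny triple store: every statement is a (subject, property, object) triple.
triples = {
    ("http://example.org/TimBernersLee", "http://example.org/invented", "http://example.org/WorldWideWeb"),
    ("http://example.org/WorldWideWeb", "http://example.org/runsOver", "http://example.org/Internet"),
    ("http://example.org/TimBernersLee", "http://example.org/directorOf", "http://example.org/W3C"),
}

def query(subject=None, prop=None, obj=None):
    """Return every triple matching the given pattern (None acts as a wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (prop is None or t[1] == prop)
            and (obj is None or t[2] == obj)]

# What did Tim Berners-Lee invent?
print(query(subject="http://example.org/TimBernersLee",
            prop="http://example.org/invented"))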

As mentioned, the top layers contain technologies that are not yet standardized or comprise just ideas. Perhaps the Cryptography and Trust layers are the most unusual of them. Cryptography ensures and verifies the origin of web statements from a trusted source by a digital signature of RDF statements. Trust in derived statements means that the premises come from a trusted source and that the formal logic used in deriving new information is reliable.


World Wide Web

September 3, 2011
Leave a Comment

Dear friends of Technical English,

I have just started publishing materials for my projected e-book devoted to Internet English, i.e. English around the Internet. It means that over a certain period of time I will publish posts which will form the basic technical texts of the units of the mentioned e-book, with the working name Internet English. The draft content of the e-book has already been published on my blog http://traintechenglish.wordpress.com in the newsletter Number 33 – WWW, Part 1 / August 2011. One topic in the list corresponds to one unit in the e-book.

Thus you find below the first post of a series dealing with Internet English. I hope these texts will help develop your professional English and at the same time bring you topical information about the Internet.    Galina Vitkova

 

World Wide Web

 Composed by Galina Vitkova

The World Wide Web (WWW or simply the Web) is a system of interlinked hypertext documents that runs over the Internet. A Web browser enables a user to view Web pages that may contain text, images, and other multimedia. Moreover, the browser ensures navigation between the pages using hyperlinks. The Web was created around 1990 by the Englishman Tim Berners-Lee and the Belgian Robert Cailliau, working at CERN in Geneva, Switzerland.


The term Web is often mistakenly used as a synonym for the Internet itself, but the Web is a service that operates over the Internet, as e-mail, for example, does. The history of the Internet dates back significantly further than that of the Web.

Basic terms

The World Wide Web is the combination of four basic ideas:

  • The hypertext: a format of information which in a computer environment allows one to move from one part of a document to another or from one document to another through internal connections (called hyperlinks) among these documents;
  • Resource Identifiers: unique identifiers used to locate a particular resource (computer file, document or other resource) on the network – this is commonly known as a URL (Uniform Resource Locator) or URI (Uniform Resource Identifier), although the two have subtle technical differences;
  • The Client-server model of computing: a system in which client software or a client computer makes requests of server software or a server computer that provides the client with resources or services, such as data or files;
  • Markup language: characters or codes embedded in a text, which indicate structure, semantic meaning, or advice on presentation.

 

How the Web works

Viewing a Web page or other resource on the World Wide Web normally begins either by typing the URL of the page into a Web browser, or by following a hypertext link to that page or resource. The act of following hyperlinks from one Web site to another is referred to as browsing, or sometimes as surfing the Web. The first step is to resolve the server-name part of the URL into an Internet Protocol address (IP address) using the global, distributed Internet database known as the Domain Name System (DNS). The browser then establishes a Transmission Control Protocol (TCP) connection with the server at that IP address.

TCP state diagram

The next step is dispatching a HyperText Transfer Protocol (HTTP) request to the Web server in order to request the resource. In the case of a typical Web page, the HyperText Markup Language (HTML) text is first requested and parsed (parsing means a syntactic analysis) by the browser, which then makes additional requests for graphics and any other files that form part of the page in quick succession. After that the Web browser renders (see the note at the end of this paragraph) the page as described by the HyperText Markup Language (HTML), Cascading Style Sheets (CSS) and other files received, incorporating the images and other resources as necessary. This produces the on-screen page that the viewer sees.

Notes:

  • Rendering is the process of generating an image from a model by means of computer programs.
  • Cascading Style Sheets (CSS) is a style sheet language used to describe the look and formatting of a document written in a markup language.
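
The request/response steps described above can be sketched with Python's standard library; example.com is used only as a placeholder host, and error handling is omitted.

import socket
from http.client import HTTPConnection

host = "example.com"                     # placeholder host for illustration

# 1) DNS: resolve the server name to an IP address
ip_address = socket.gethostbyname(host)

# 2) TCP: open a connection to the server, 3) HTTP: dispatch a GET request
connection = HTTPConnection(ip_address, 80, timeout=10)
connection.request("GET", "/", headers={"Host": host})
response = connection.getresponse()

# 4) The browser would now parse the returned HTML and render the page
html = response.read().decode("utf-8", errors="replace")
print(response.status, response.reason)
print(html[:200])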

 

Web standards

At its core, the Web is made up of three standards:

  • the Uniform Resource Identifier (URI), which is a string of characters used to identify a name or a resource on the Internet;
  • the HyperText Transfer Protocol (HTTP), which is a networking protocol for distributed, collaborative, hypermedia information systems; HTTP is the foundation of data communication on the Web;
  • the HyperText Markup Language (HTML), which is the predominant markup language for web pages. A markup language is a system for annotating a text in a way that is syntactically distinguishable from that text.

 


100% integration of renewable energies?

August 13, 2011
1 Comment

Composed by Galina Vitkova

The Renewables-Grid-Initiative (RGI) promotes effective integration of 100% electricity produced from renewable energy sources.

Energy Green Supply

I do not believe this RGI statement. I am sure that it is impossible from the technical and technological points of view. Simply recall the very low share of renewables in the entire production of world electricity (3% excluding hydroelectricity), the very high investment costs and the very high prices of electricity produced from renewables nowadays.

Concerns about climate and energy security (especially in the case of nuclear power plants) are reasons supporting the efforts for a quick transformation towards a largely renewable power sector. The European emissions reduction targets to keep the temperature increase below 2°C require the power sector to be fully decarbonised by 2050. Large parts of society demand that the decarbonisation be achieved predominantly with renewable energy sources.

Different types of renewable energy

Renewables advocates do not speak much about real solutions to the really complex problems of renewable sources. Very often they are not aware of them. Even if renewable energy technologies are now established and appreciated by officials and green activists as a key means of producing electricity in a climate- and environment-friendly way, many crucial problems remain unsolved. Additional power lines, which are needed for transporting electricity from new renewable generation sites to users, have a negative impact on the environment, including biodiversity, ecosystems and the landscape. Furthermore, electricity surpluses, produced by renewables when electricity consumption is very low, cause enormous problems with the storage of these surpluses. Besides, there are serious problems with the dispatch control of a power system with a great penetration of renewables (see Variability and intermittency of wind energy in Number 31 – Giving a definition / July 2011). On the whole, the three most important problems waiting to be solved, each of which demands massive investments, are:

  • building a great number of additional electricity transmission lines due to numerous and dispersed renewable sites;
  • accommodating electricity storage needs in the case of electricity surpluses from renewables;
  • integrating intermittent sources of electricity production into the scheduled control of power grids.

Thus, concerns about the impacts of renewables integration in European power systems need to be carefully studied, fully understood and addressed.

Let us look closely at the issues of building new transmission lines. In the coming decade thousands of kilometres of new lines should be built across Europe. Renewable energy sources are abundant and varied, but they are mostly available in remote areas where demand is low and economic activity infrequent. Therefore, thorough strategic planning is required to realise a new grid infrastructure that meets the electricity needs of the next 50-70 years. The new grid architecture is supposed to enable the integration of all renewable energy sources – independently of where and when they are generated – and to expand the possibilities for distributed generation and demand-side management.

Grid expansion is inevitable but often controversial. The transmission system operators (TSOs) need to accommodate not only the 2020 targets but also to prepare for the more challenging full decarbonisation of the power sector by 2050. The non-governmental organisations (NGO Global Network) community is still not united with respect to supporting or opposing the grid expansion. A number of technical, environmental and health questions need to be addressed and clarified to improve a shared understanding among and across TSOs and NGOs. RGI is trying to bring together cooperating TSOs and NGOs.

The grid expansion could be accomplished by means of overhead lines and underground cables. Both of them may carry alternating current (AC) or direct current (DC). In the past it was relatively easy to choose between lines and cables: cables were used in the grid mainly for shorter distances, mostly because they are more expensive and have a shorter technical lifetime (50% of that of overhead lines), whereas overhead lines were used in other cases. Nowadays the situation is more complex, since more options and more parameters have to be considered. In the future cables will prospectively be even more utilised, as development is going towards higher power levels.

Cables have higher public acceptance because of their lower disturbance of the natural scenery, lower electromagnetic radiation, less interference with wildlife and higher weather tolerance. Overhead lines, unfortunately, disturb the scenery and seriously affect wildlife and protected areas.

Grid development for expanding renewables by means of overhead lines endangers bird populations in Europe. High, large-scale bird mortality from above-ground power lines results from:

  • Risk of electrocution,
  • Risk of collision,
  • Negative impacts on habitats.

All of that constitutes a significant threat to birds and other wildlife. For these reasons standards to protect birds (the Habitats and Birds Directives) are being worked out.

Moreover, the European Commission is currently working on a new legislation to ensure that the energy infrastructure needed for implementing the EU climate and energy targets will be built in time.



Intermittence of renewables

June 30, 2011
8 Comments

Composed by Galina Vitkova

Everybody knows that renewables are expensive, sometimes very expensive, and make the electricity price go up. For example, in the Czech Republic the expansion of solar photovoltaic installations, subsidised from the state budget, caused the electricity price to increase by over 12%. Another example of increasing costs is given in the table below.

Increase in system operation costs (Euros per MW·h) for 10% and 20% wind share[7]

Wind share    Germany    Denmark    Finland    Norway    Sweden
10%           2.5        0.4        0.3        0.1       0.3
20%           3.2        0.8        1.5        0.3       0.7
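
A small worked example shows how to read the table; the annual consumption figure below is hypothetical and serves only to indicate the order of magnitude of the extra cost.

# Extra system operation cost for Germany at a 20% wind share (table above).
extra_cost_per_mwh = 3.2              # EUR per MW·h, from the table
annual_consumption_twh = 500          # hypothetical annual consumption in TW·h

annual_consumption_mwh = annual_consumption_twh * 1_000_000
extra_cost_eur = extra_cost_per_mwh * annual_consumption_mwh
print(f"Extra cost: {extra_cost_eur / 1e9:.1f} billion EUR per year")   # -> 1.6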

Nevertheless, only a few people are aware of the great intermittency of renewables, which excludes their usage as a main source of electricity generation, not only nowadays but in the future too. Actually, no technical and industrial society can exist and develop using unreliable and intermittent power supplies. Nothing in our integrated and automated world works without electricity, the life-blood of technical civilisation. Just imagine what would happen to a society where the electricity supply is turned off for even a short time, possibly every week, or where the power is cut for a whole fortnight or more. Life stops, production ceases, chaos sets in. And this is exactly what could arise if we bank on renewables. Thus, let us take notice of the features specific to wind and solar (photovoltaic) power installations of the kind typically built in Europe.


The entire problem with renewables is that they are perilously intermittent power sources. The electricity produced using them is not harmonised with the electrical demand cycle. Renewable-based installations generate electricity when the wind blows or the sun shines. Since the energy produced earlier in the day cannot be stored, extra generating capacity will have to be brought on-line to cover the deficiency. This means that for every renewable-based system installed, a conventional power station will have to be either built or retained to ensure continuity of energy supply. But this power station will have to be up and running all the time (i.e. to act as a ’spinning reserve’), because it takes up to 12 hours to bring a power station on-line from a cold start-up. Thus, if we want to keep up continuity of supply, the renewable sources result in twice the cost and save very little fossil fuel.

Wind power is extremely variable. Building thousands of wind turbines still does not resolve the fundamental problem of the enormous wind variability. When days without significant winds occur, it doesn’t matter how many wind turbines are installed as they all go off-line. So, it is extremely difficult to integrate wind power stations into a normal generating grid.  

Solar energy is not available at night or on cloudy days, which makes energy storage the most important issue in providing the continuous availability of energy. Off-grid photovoltaic systems traditionally use rechargeable batteries to store excess electricity. With grid-tied systems, excess electricity can be sent to the transmission grid and settled later.

Renewable energy supporters declare that renewable power can somehow be stored to cope with power outages. The first of these energy storage facilities, which comes to the aid of the thousands of wind turbines standing motionless when winds do not blow and of solar installations generating nothing when the sun does not shine, is the pumped water storage system. However, this claim is not well founded for the following reasons:

  • In most countries of Europe pumped storage systems are already fully used for coping with variability in electrical demand, so as a rule they have no extra capacity for overcoming variability in supply due to unreliable wind and solar generation systems.
  • Pumped storage systems have limited capacity, which can be used for electricity generation for just a few hours, while wind or solar generation systems can go off-line for days or weeks at a time.
  • Pumped storage systems are not only hugely expensive to construct; the topography of European countries also ensures that very few sites are available.

As for flywheel energy storage, compressed air storage, battery storage and hydrogen storage, each of these systems is highly complicated, very expensive, hugely inefficient and limited in capacity. Hydrogen storage is especially popular and hyped among proponents of renewables. The hydrogen, produced and stored when renewables generate more electricity than can be used, is supposed to propel vehicles and generators. Unfortunately these hydrogen-powered vehicles and generators are only about 5% efficient. In addition, hydrogen storage vessels are highly flammable and potentially explosive. In practice there is currently no storage system available that could remotely be expected to replace renewable energy sources on a large scale while they are out of operation.

In numerous publications about renewables we are chiefly informed about expanding and increasing investments in renewables, and about multiplying their installed capacity and volumes of produced electricity, everything in absolute values, without comparing these indicators with the values of other resources, especially when volumes of production are discussed. In the table below you find comparable values of the volumes of electricity produced by nuclear power plants and renewable installations. Look it through and form your own opinion of the problem.

Comparison of nuclear and renewable electricity production by top nuclear electricity producers (TW·h per year / % of total electricity production in the country)

    Country    Year    Nuclear (2007)    Wind Power        Solar Power
1   USA        2009    837/19.4%         70.8/1.64%        0.808/0.019%
2   Japan      2008    264/23.5%         1.754/0.156%      0.002/0.000%
3   Russia     2008    160/15.8%         0.007/0.0007%     –
4   Germany    2010    141/22.3%         36.5/5.499%       12.0/1.898%
5   Canada     2008    93/14.6%          2.5/0.392%        0.017/0.003%

Conclusion: Ordinary people must know and take an interest in the situation in electricity production and supply. Only then will they be able to press their governments to make the right decisions in order to ensure a stable electricity supply, without which modern civilisation cannot exist and improve.



Study in Ireland

June 12, 2011
Leave a Comment
Dear friends of Technical English,
Here below you find a description of how my former student sees his experience of studying in Ireland. Nowadays there are many opportunities for studying and teaching everywhere across Europe. Learn Technical English and you may get a stay at one of Europe's technical universities.  Galina Vitkova
 
All Ireland Flag

My study in Ireland

By David Jirovec

I spent 8.5 months (both the winter and the summer semester) in Ireland within the EU programme Erasmus. In Cork, Ireland's second biggest city, I was studying computer science, the same subject as at the Czech Technical University (CTU) in Prague. Studying in Ireland, namely at the Cork Institute of Technology (CIT), is rather similar to studying at a high school in Bohemia. A student attends his/her class of about 20 participants, and these people study nearly all courses together. We were recommended to choose one of these classes and join it. But since I am in my final year at CTU, I couldn't find any class with a suitable combination of courses. So finally, I took each course with a different class.

Cork City Marathon 2011

These small classes are set for both lectures and labs, so there are no large lectures for 200 participants as at CTU. Students are never asked to come to the blackboard and show something to the whole class, and the results of any student's tests are never shown to other students.

Exams are carried out only in written form. They take place in very big halls, where students from different courses are present at the same time. Very strict security measures are enforced there: students cannot take any bags with them, and it is forbidden even to have a mobile phone there. Exams are easier than at CTU; sometimes it is a matter of choosing 3 questions out of a total of 5 and answering them, instead of solving all the questions. What is worse is that there are no 3 free exam attempts as at CTU. If a student fails once, it is possible to try again in the summer, but it costs some euros. There is no given minimum of points for any test; it is only necessary to have a total of at least 40/100 points at the end of a semester for both semester work and exams. And no compulsory attendance at any classes is required.

Seat of the Rectorate of CTU in Prague

 
Relationships between students and teachers are very good; teachers are friendly and helpful. I had no problems with my English in classes, and teachers were easy to understand, but sometimes it was more difficult to understand the students, especially when they were talking to each other. I don't see much improvement in my English grammar, but my communication skills in English improved a lot. It was definitely very profitable to use English for all day-to-day tasks and conversation, and to observe the little differences between the English commonly used in Ireland and the English taught at school in Prague. Irish people often speak English that is mostly slang. So I recommend anybody who is going to visit Ireland to use http://www.urbandictionary.com/define.php?term=what%27s+the+craic%3F in order to understand phrases brought about by Celtic community dialects.

PS The ERASMUS Programme – studying in Europe and more – is the EU’s flagship education and training programme enabling 200 000 students to study and work abroad each year. In addition, it funds co-operation between higher education institutions across Europe. The programme not only supports students, but also professors and business staff who want to teach abroad, as well as helping university staff to receive training. European Commission , Education & Training (http://ec.europa.eu/education/lifelong-learning-programme/doc80_en.htm)

