Why Technical English

Happy New Year!

January 5, 2013


 

I look forward to helping you improve your Technical English even more this year.

Galina Vitkova

 


Search engine – essential information

December 29, 2011
Composed by Galina Vitkova using Wikipedia

The term search engine usually refers to a system for searching for information on the Web. Other kinds of search engines are enterprise search engines, which search on intranets, personal search engines, and mobile search engines. Different selection and relevance criteria may apply in different environments, or for different uses.

Diagram of the search engine concept

Web search engines operate in the following order: 1) Web crawling, 2) Indexing, 3) Searching. Search engines store information about a large number of web pages, which they retrieve from the Web itself. These pages are fetched by a Web crawler (sometimes also known as a spider) – an automated Web browser which follows every link it sees. The contents of each page are then analyzed to determine how it should be indexed. Data about web pages are stored in an index database. Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages. Other engines, such as AltaVista, store every word of every page they find. The cached page always holds the actual search text, since it is the text that was actually indexed. Search engines use regularly updated indexes to operate quickly and efficiently.

Architecture of a Web crawler.
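To make the crawl-and-index loop above more concrete, here is a minimal sketch in Python using only the standard library. The seed URL, the page limit and the very naive link handling are illustrative assumptions, not part of any real search engine.

```python
# A minimal sketch of the crawl -> index loop described above.
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkAndTextParser(HTMLParser):
    """Collects hyperlinks and visible text from one HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.text_parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.text_parts.append(data)

def crawl(seed_url, max_pages=10):
    """Follow links breadth-first and return {url: page text} ready for indexing."""
    to_visit, seen, pages = [seed_url], set(), {}
    while to_visit and len(pages) < max_pages:
        url = to_visit.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        except Exception:
            continue  # skip pages that cannot be fetched
        parser = LinkAndTextParser()
        parser.feed(html)
        pages[url] = " ".join(parser.text_parts)
        to_visit.extend(urljoin(url, link) for link in parser.links)
    return pages

# pages = crawl("http://example.com/")  # hypothetical seed URL
```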

When a user makes a query, commonly by entering keywords, the search engine looks up its index and provides a listing of the best-matching web pages according to its criteria. Usually the listing comprises a short summary containing the document title and sometimes parts of the text. Most search engines support the use of the Boolean operators AND, OR and NOT to further specify the search query. The listing is often sorted with respect to some measure of relevance of the results. An advanced feature is proximity search, which allows users to define the distance between keywords.
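The lookup step can be illustrated with a toy inverted index and simple Boolean AND/NOT queries; the three documents below are invented purely for the example.

```python
# A toy inverted index and Boolean query evaluation (AND / NOT).
docs = {
    1: "search engines store information about web pages",
    2: "a web crawler follows every link it sees",
    3: "the index database stores data about web pages",
}

index = {}
for doc_id, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

def search(include, exclude=()):
    """Return ids of documents containing all `include` words (AND)
    and none of the `exclude` words (NOT)."""
    result = set(docs)
    for word in include:
        result &= index.get(word, set())
    for word in exclude:
        result -= index.get(word, set())
    return sorted(result)

print(search(["web", "pages"]))              # AND query   -> [1, 3]
print(search(["web"], exclude=["crawler"]))  # AND ... NOT -> [1, 3]
```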

Most Web search engines are commercial ventures supported by advertising revenue. As a result, some of them employ the controversial practice of allowing advertisers to pay to have their listings ranked higher in search results. The vast majority of search engines run by private companies use proprietary algorithms and closed databases, though a few of them are open source.

Nowadays the most popular search engines are as follows:

Google. Around 2001, the Google search engine rose to prominence. Its success was based in part on the concept of link popularity and PageRank. It also uses more than 150 criteria to determine relevancy. Google is currently the most widely used search engine.

Baidu. Due to the difference between ideographic and alphabetic writing systems, the Chinese search market did not boom until the introduction of Baidu in 2000. Since then, neither Google, Yahoo! nor Microsoft has been able to reach the top position there as they have in other parts of the world. One reason may be the media control policy of the Chinese government, which requires all network media to filter potentially sensitive information out of their web pages.

Yahoo! Search. Only since 2004 has Yahoo! Search been an original web crawler-based search engine, with a reinvented crawler called Yahoo! Slurp. Its new search engine results were included in all Yahoo! sites that had a web search function. It also started to sell its search engine results to other companies, to show on their web sites.

After the booming success of keyword search engines such as Google and Yahoo! Search, a new type of search engine, the meta search engine, appeared. Strictly speaking, a meta search engine is not a search engine of its own; technically, it is a search engine built on top of other search engines. A typical meta search engine accepts user queries in the same way as traditional search engines do. But instead of searching for keywords in its own database, it sends those queries to other, non-meta search engines. Then, based on the search results returned by several non-meta search engines, it selects the best ones (according to different algorithms) and shows them back to the user. Examples of such meta search engines are Dog Pile (http://www.dogpile.com/) and All in One News (http://www.allinonenews.com/).
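How a meta search engine might merge the ranked lists returned by several underlying engines can be sketched roughly as follows; the engine result lists and the reciprocal-rank scoring here are assumptions made for illustration only, not the algorithm of any particular product.

```python
# A rough sketch of merging ranked result lists from several engines.
from collections import defaultdict

def merge_results(result_lists, top_n=5):
    """Score each URL by its rank in every list (earlier rank = higher score)
    and return the best URLs across all engines."""
    scores = defaultdict(float)
    for results in result_lists:
        for rank, url in enumerate(results):
            scores[url] += 1.0 / (rank + 1)   # simple reciprocal-rank scoring
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

engine_a = ["http://example.org/a", "http://example.org/b", "http://example.org/c"]
engine_b = ["http://example.org/b", "http://example.org/d"]
print(merge_results([engine_a, engine_b]))
```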

Meta search engine

PS: This text is part of an upcoming e-book titled Internet English (see Number 33 – WWW, Part 1 / August 2011 – Editorial). G. Vitkova

 

Dear visitor, if you want to improve your professional English and at the same time gain basic, comprehensive, targeted information about the Internet and the Web, then

subscribe to “Why Technical English”.

Find the subscription options on the right sidebar and:

  • Subscribe by Email – Sign me up        OR
  • Subscribe with Bloglines                   OR
  • Subscribe.ru

 


Website – basic information

November 28, 2011

Website and Its Characteristics

Composed by Galina Vitkova using Wikipedia

A website (or web site) is a collection of web pages, typically served from a single domain name on the Internet. A web page is a document usually written in HTML (HyperText Markup Language), which is almost always accessible via HTTP (HyperText Transfer Protocol). HTTP is a protocol that transfers information from the website's server so that it can be displayed in the user's web browser. All publicly accessible websites constitute the immense World Wide Web of information. More formally, a website might be considered a collection of pages dedicated to a similar or identical subject or purpose and hosted through a single domain.
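As a small illustration of HTTP transferring a page from a website's server to the user's side, the following Python sketch requests a page and inspects the response; example.com is just a neutral placeholder domain.

```python
# Fetch one web page over HTTP and look at the response.
import urllib.request

with urllib.request.urlopen("http://example.com/") as response:
    print(response.status)                     # e.g. 200 (OK)
    print(response.getheader("Content-Type"))  # e.g. text/html; charset=UTF-8
    html = response.read().decode("utf-8")     # the HTML document itself
    print(html[:80])                           # first characters of the page
```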

The pages of a website are accessed from a common root URL (Uniform Resource Locator, or Universal Resource Locator) called the homepage, and usually reside on the same physical server. The URLs of the pages organise them into a hierarchy. Nonetheless, the hyperlinks between web pages shape how the reader perceives the overall structure and how the traffic flows between the different parts of the site. The first on-line website appeared in 1991 at CERN (the European Organization for Nuclear Research, situated in the suburbs of Geneva on the Franco–Swiss border) – for more information see ViCTE Newsletter Number 5 – WWW History (Part 1) / May 2009 and Number 6 – WWW History (Part 2) / June 2009.

A website may belong to an individual, a business or another organization. Any website can contain hyperlinks to any other website, so distinguishing one particular site from another may sometimes be difficult for the user.

Websites are commonly written in, or dynamically converted to, HTML and are accessed using a web browser. Websites can be accessed from a number of computer-based and Internet-enabled devices, including desktop computers, laptops, PDAs (personal digital assistants) and cell phones.

Website Drafts and Notes

Image by Jayel Aheram via Flickr

A website is hosted on a computer system called a web server or an HTTP server. These terms also refer to the software that runs on the server and that retrieves and delivers the web pages in response to users' requests.

Static and dynamic websites are distinguished. A static website is one whose content is not expected to change frequently and is manually maintained by a person or persons using editor software. It provides the same standard information to all visitors for a certain period of time between updates of the site.

A dynamic website is one that has frequently changing information or interacts with the user on the basis of stored state (HTTP cookies or database variables such as previous history, session variables, server-side variables) or of direct interaction (form elements, mouseovers, etc.). When the web server receives a request for a given page, the page is generated automatically by the software. A site can display the current state of a dialogue between users, can monitor a changing situation, or can provide information adapted in some way to the particular user.
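A minimal sketch of this idea, assuming Python's built-in http.server module, might look as follows: the handler composes a fresh page for every request instead of returning a fixed file (the host and port are arbitrary choices for the example).

```python
# A dynamic page in miniature: the response is generated at request time.
from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime

class DynamicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The content changes on every visit because it is built per request.
        body = f"<html><body><p>Current server time: {datetime.now()}</p></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), DynamicHandler).serve_forever()
```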

Static content may also be dynamically generated, either periodically or when certain conditions for regeneration occur, in order to avoid the performance loss of invoking the dynamic engine.

Website Designer & SEO Company Lexington Devel...
Image by temptrhonda via Flickr

Some websites demand a subscription to access some or all of their content. Examples of subscription websites include numerous business sites, parts of news websites, academic journal websites, gaming websites, social networking sites, websites affording real-time stock market data, websites providing various services (e.g., websites offering storing and/or sharing of images, files, etc.) and many others.

To show active content on sites, or even to create rich Internet applications, plug-ins such as Microsoft Silverlight, Adobe Flash, Adobe Shockwave or applets are used. They provide interactivity for the user and real-time updating within web pages (i.e. pages do not have to be reloaded to effect changes), mainly by applying the DOM (Document Object Model) and JavaScript.

There are many varieties of websites, each specialising in a particular type of content or use, and they may be arbitrarily classified in any number of ways. A few such classifications might include: Affiliate, Archive site, Corporate website, Commerce site, Directory site and many others (see a detailed classification in Types of websites).

In February 2009, the Internet monitoring company Netcraft, which has tracked web growth since 1995, reported that there were 106,875,138 websites in 2007 and 215,675,903 websites in 2009 with domain names and content on them, compared to just 18,000 websites in August 1995.

PS: Spelling – which is better, which is correct: “website” or “web site”?

The form “website” has gradually become the standard spelling. It is used, for instance, by such leading dictionaries and encyclopedias as the Canadian Oxford Dictionary, the Oxford English Dictionary and Wikipedia. Nevertheless, the form “web site” is still widely used, e.g. by Encyclopædia Britannica (including its Merriam-Webster subsidiary). Among major Internet technology companies, Microsoft uses “website” and occasionally “web site”, Apple uses “website”, and Google uses “website”, too.

PPS: You can find unknown technical terms in the Internet English Vocabulary.

Reference: Website – Wikipedia, the free encyclopedia

Have You Donated To Wikipedia Already?

Do you use Wikipedia? Do you know that Jimmy Wales, a founder of Wikipedia, decided to keep Wikipedia advertising-free and unbiased? As a result, they now have financial problems with keeping it running. Any donation, even a small sum, is helpful. So here is the page where you can donate.

Dear visitor, if you want to improve your professional English and at the same time gain basic, comprehensive, targeted information about the Internet and the Web, subscribe to “Why Technical English”.

Look at the right sidebar and subscribe as you like:

  • by Email subscription … Sign me up
  • Subscribe with Bloglines
  • Subscribe.ru

Right now, while the e-book “Internet English” is being prepared (see ViCTE Newsletter Number 33 – WWW, Part 1 / August 2011), posts on this topic are being published here. Your comments on the posts are welcome.


 


The Semantic Web – great expectations

October 31, 2011

By Galina Vitkova

The Semantic Web is a further development of the World Wide Web aimed at making the content of web pages interpretable as machine-readable information.

In the classical Web, based on HTML pages, information is contained in texts or documents that a browser reads and composes into web pages visible or audible to humans. The Semantic Web is supposed to store information as a semantic network through the use of ontologies. A semantic network is usually a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent relations among the concepts. An ontology is simply a vocabulary that describes objects and how they relate to one another. A program agent is thus able to mine facts directly from the Semantic Web and draw logical conclusions based on them. The Semantic Web functions together with the existing Web and uses the HTTP protocol and URI resource identifiers.

The term Semantic Web was coined by Sir Tim Berners-Lee, the inventor of the World Wide Web and director of the World Wide Web Consortium (W3C), in May 2001 in the journal Scientific American. Tim Berners-Lee considers the Semantic Web the next step in the development of the World Wide Web. W3C has adopted and promoted this concept.

Main idea

The Semantic Web is simply a hyper-structure above the existing Web. It extends the network of hyperlinked human-readable web pages by inserting machine-readable metadata about pages and about how they are related to each other. It is proposed to help computers “read” and use the Web in a more sophisticated way. Metadata can allow more complex, focused Web searches with more accurate results. To paraphrase Tim Berners-Lee, the extension will let the Web – currently similar to a giant book – become a giant database. Machine processing of the information in the Semantic Web is enabled by two of its most important features.

  • First – the all-around application of uniform resource identifiers (URIs), which are known as addresses. Traditionally on the Internet these identifiers are used for pointing hyperlinks at an addressed object (web pages, e-mail addresses, etc.). In the Semantic Web the URIs are also used for specifying resources, i.e. a URI identifies exactly one object. Moreover, in the Semantic Web not only web pages or their parts have URIs; objects of the real world may have URIs too (e.g. humans, towns, novel titles, etc.). Furthermore, abstract resource attributes (e.g. name, position, colour) have their own URIs. As URIs are globally unique, they make it possible to identify the same objects in different places on the Web. Concurrently, URIs of the HTTP protocol (i.e. addresses beginning with http://) can be used as addresses of documents that contain a machine-readable description of these objects.

  • Second – the application of semantic networks and ontologies. Present-day methods of automatically processing information on the Internet are, as a rule, based on frequency and lexical analysis or on parsing of text designed for human perception. In the Semantic Web, the RDF (Resource Description Framework) standard is applied instead, which uses semantic networks (i.e. graphs whose vertices and edges have URIs) for representing the information; a small sketch of such a graph follows this list. Statements coded by means of RDF can be further interpreted by ontologies created in compliance with the RDF Schema and OWL (Web Ontology Language) standards in order to draw logical conclusions. Ontologies are built using so-called description logics. Ontologies and schemata help a computer to understand human vocabulary.
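A small sketch of such a semantic network, kept simply as a set of (subject, predicate, object) triples whose elements are URIs, might look like this; all the example.org URIs are invented for illustration.

```python
# A tiny semantic network as (subject, predicate, object) triples of URIs.
triples = {
    ("http://example.org/person/Ada", "http://example.org/prop/livesIn",
     "http://example.org/place/London"),
    ("http://example.org/person/Ada", "http://example.org/prop/knows",
     "http://example.org/person/Charles"),
    ("http://example.org/place/London", "http://example.org/prop/locatedIn",
     "http://example.org/place/England"),
}

def objects_of(subject, predicate):
    """Return every object linked to `subject` by `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects_of("http://example.org/person/Ada", "http://example.org/prop/livesIn"))
```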

 

Semantic Web Technologies

The architecture of the Semantic Web can be represented by the Semantic Web Stack, also known as the Semantic Web Cake or Semantic Web Layer Cake. The Semantic Web Stack is an illustration of the hierarchy of languages, where each layer exploits and uses the capabilities of the layers below. It shows how technologies that are standardized for the Semantic Web are organized to make the Semantic Web possible. It also shows how the Semantic Web is an extension (not a replacement) of the classical hypertext Web. The illustration was created by Tim Berners-Lee. The stack is still evolving as the layers are concretized.

Semantic Web Stack

As shown in the Semantic Web Stack, the following languages and technologies are used to create the Semantic Web. The technologies from the bottom of the stack up to OWL (Web Ontology Language) are currently standardized and accepted for building Semantic Web applications. It is still not clear how the top of the stack is going to be implemented. All layers of the stack need to be implemented to achieve the full vision of the Semantic Web.

  • XML (eXtensible Markup Language) is a set of rules for encoding documents in machine-readable form. It is a markup language like HTML. XML complements (but does not replace) HTML by adding tags that describe data.
  • XML Schema published as a W3C recommendation in May 2001 is one of several XML schema languages. It can be used to express a set of rules to which an XML document must conform in order to be considered ‘valid’.
  • RDF (Resource Description Framework) is a family of W3C specifications originally designed as a metadata data model. It has come to be used as a general method for the conceptual description of information that is implemented in web resources. RDF does exactly what its name indicates: using XML tags, it provides a framework to describe resources. In RDF terms, everything in the world is a resource. This framework pairs the resource with a specific location on the Web, so the computer knows exactly what the resource is. To do this, RDF uses triples written as XML tags to express this information as a graph. These triples consist of a subject, a property and an object, which are like the subject, verb and direct object of an English sentence (see the sketch after this list).
  • RDFS (RDF Schema, the RDF Vocabulary Description Language) provides a basic vocabulary for RDF and adds classes, subclasses and properties to resources, creating a basic language framework.
  • OWL (Web Ontology Language) is a family of knowledge representation languages for creating ontologies. It extends RDFS and, as the most complex layer, formalizes ontologies, describes relationships between classes and uses logic to make deductions.
  • SPARQL (Simple Protocol and RDF Query Language) is an RDF query language, which can be used to query any RDF-based data. It enables semantic web applications to retrieve information.
  • Microdata (HTML) is a standard used to nest semantics within existing content on web pages. Search engines, web crawlers, and browsers can extract and process Microdata from a web page to provide better search results.
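As a rough illustration of RDF triples and a SPARQL query working together, here is a short sketch that assumes the third-party rdflib Python library is installed (pip install rdflib); the example.org URIs and property names are invented for the example.

```python
# Build a tiny RDF graph and query it with SPARQL using rdflib.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Ada, EX.livesIn, EX.London))      # subject, property, object
g.add((EX.Ada, EX.name, Literal("Ada")))
g.add((EX.London, EX.locatedIn, EX.England))

# SPARQL: where does Ada live?
query = """
    SELECT ?place WHERE { <http://example.org/Ada> <http://example.org/livesIn> ?place . }
"""
for row in g.query(query):
    print(row[0])   # -> http://example.org/London
```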

As mentioned, the top layers contain technologies that are not yet standardized or that comprise just ideas. Perhaps the Cryptography and Trust layers are the least familiar of them. Cryptography ensures and verifies the origin of web statements from a trusted source by means of a digital signature of RDF statements. Trust in derived statements means that the premises come from a trusted source and that the formal logic used for deriving new information is reliable.


Intermittence of renewables

June 30, 2011

Composed by Galina Vitkova

Everybody knows that renewables are expensive, sometimes very expensive, and make the price of electricity go up. For example, in the Czech Republic the expansion of solar photovoltaic installations, subsidised from the state budget, caused the electricity price to increase by over 12%. Another example of increasing costs is given in the table below.

Increase in system operation costs (Euros per MW·h) for 10% and 20% wind share[7]

Wind share    Germany   Denmark   Finland   Norway   Sweden
10%           2.5       0.4       0.3       0.1      0.3
20%           3.2       0.8       1.5       0.3      0.7

Nevertheless, only a few people are aware of the great intermittence of renewables, which excludes their use as a main source of electricity generation not only nowadays but in the future too. Actually, no technical and industrial society can exist and develop using unreliable and intermittent power supplies. Nothing in our integrated and automated world works without electricity, this life-blood of technical civilisation. Just imagine what would happen to a society where the electricity supply is turned off for even a short time, possibly every week, or where the power is cut for a whole fortnight or more. Life stops, production ceases, chaos sets in. And this is exactly what could arise if we bank on renewables. Thus let us take notice of features specific to wind and solar (photovoltaic) power installations, which are the types typically built in Europe.


The entire problem with renewables is that they are perilously intermittent power sources. The electricity produced using them is not harmonized with the electrical demand cycle. Renewable-based installations generate electricity when the wind blows or the sun shines. Since the energy produced earlier in the day cannot be stored, extra generating capacity has to be brought on-line to cover the deficiency. This means that for every renewable-based system installed, a conventional power station has to be either built or retained to ensure continuity of energy supply. But this power station has to be up and running all the time (i.e. to be a 'spinning reserve'), because it takes up to 12 hours to bring a power station on-line from a cold start-up. Thus, if we want to keep up continuity of supply, the renewable sources result in twice the cost and save very little fossil fuel.

Wind power is extremely variable. Building thousands of wind turbines still does not resolve the fundamental problem of the enormous wind variability. When days without significant winds occur, it doesn’t matter how many wind turbines are installed as they all go off-line. So, it is extremely difficult to integrate wind power stations into a normal generating grid.  

Solar energy is not available at night or on cloudy days, which makes energy storage the most important issue in providing continuous availability of energy. Off-grid photovoltaic systems traditionally use rechargeable batteries to store excess electricity. With grid-tied systems, excess electricity can be sent to the transmission grid and credited later.

Renewable energy supporters declare that renewable power can somehow be stored to cope with power outages. The first of these energy storage facilities, meant to come to the aid of thousands of wind turbines standing motionless when winds do not blow and of solar installations generating nothing when the sun does not shine, is the pumped water storage system. However, this claim is not well founded, for the following reasons:

  • In most European countries pumped storage systems are already fully used for coping with variability in electrical demand, so as a rule they have no extra capacity for overcoming variability in supply caused by unreliable wind and solar generation systems.
  • Pumped storage systems have limited capacity, which can be used for generating electricity for just a few hours, while wind or solar generation systems can go off-line for days or weeks at a time.
  • Pumped storage systems are not only hugely expensive to construct; the topography of European countries also ensures that very few sites are available.

As for flywheel energy storage, compressed-air storage, battery storage and hydrogen storage, each of these systems is highly complicated, very expensive, hugely inefficient and limited in capacity. Hydrogen storage is especially popular and hyped among proponents of renewables. The hydrogen, produced and stored when renewables generate more electricity than can be used, is supposed to propel vehicles and generators. Unfortunately, these hydrogen-powered vehicles and generators are only about 5% efficient. In addition, hydrogen storage vessels are highly flammable and potentially explosive. In practice, there is currently no storage system available that can remotely be expected to stand in for renewable energy resources on a large scale while they are out of operation.

In numerous publications about renewables we are chiefly informed about expanding and increasing investments in renewables and about the multiplication of their installed capacity and of the volumes of electricity produced – everything in absolute values, without comparing these indicators with the values of other resources, especially when volumes of production are discussed. In the table below you will find comparable figures for the volumes of electricity produced by nuclear power plants and by renewable installations. Look it through and form your own opinion of the problem.

Comparison of nuclear and renewable electricity production by top nuclear electricity producers (TW·h per year / % of total electricity production in the country)

Country       Year   Nuclear (2007)   Wind power       Solar power
1 USA         2009   837/19.4%        70.8/1.64%       0.808/0.019%
2 Japan       2008   264/23.5%        1.754/0.156%     0.002/0.000%
3 Russia      2008   160/15.8%        0.007/0.0007%    –
4 Germany     2010   141/22.3%        36.5/5.499%      12.0/1.898%
5 Canada      2008   93/14.6%         2.5/0.392%       0.017/0.003%

Conclusion: Ordinary people must know about and take an interest in the situation in electricity production and supply. Only then will they be able to press governments to make the right decisions in order to ensure a stable supply of electricity, without which modern civilisation cannot exist and develop.



Fusion reactors in the world

May 10, 2011
Implosion of a fusion microcapsule

Composed by Galina Vitkova

Fusion power is power generated by nuclear fusion processes. In fusion reactions two light atomic nuclei fuse together to form a heavier nucleus. During the process a comparatively large amount of energy is released.

The term “fusion power” is commonly used to refer to potential commercial production of usable power from a fusion source, comparable to the usage of the term “steam power”. Heat from the fusion reactions is utilized to operate a steam turbine which in turn drives electrical generators, similar to the process used in fossil fuel and nuclear fission power stations.

Fusion power has significant safety advantages in comparison with current power stations based on nuclear fission. Fusion only takes place under very limited and controlled conditions, so a failure of precise control or a pause in fueling quickly shuts down the fusion reactions. There is no possibility of runaway heat build-up or a large-scale release of radioactivity, and there is little or no atmospheric pollution. Furthermore, the power source comprises light elements in small quantities, which are easily obtained and largely harmless to life, and the waste products are short-lived in terms of radioactivity. Finally, there is little overlap with nuclear weapons technology.

 

Fusion Power Grid

 

Fusion powered electricity generation was initially believed to be readily achievable, as fission power had been. However, the extreme requirements for continuous reactions and plasma containment led to projections which were extended by several decades. More than 60 years after the first attempts, commercial fusion power production is still believed to be unlikely before 2040.

The leading designs for controlled fusion research use magnetic (tokamak design) or inertial (laser) confinement of a plasma.

Magnetic confinement of a plasma

The tokamak (see also Number 29 – Easy such and so / April 2011, Nuclear power – tokamaks), using magnetic confinement of a plasma, dominates modern research. Very large projects like ITER (see also The Project ITER – past and present) are expected to pass several important turning points toward commercial power production, including a burning plasma with long burn times, high power output, and on-line fueling. There are no guarantees that the project will be successful. Unfortunately, previous generations of tokamak machines have revealed new problems many times. But the entire field of high-temperature plasmas is much better understood now than formerly, so ITER is optimistically expected to meet its goals. If successful, ITER would be followed by a "commercial demonstrator" system, similar in purpose to the very earliest power-producing fission reactors built in the period before wide-scale commercial deployment of larger machines started in the 1960s and 1970s.

Ultrascale scientific computing

 

Stellarators, which also use magnetic confinement of a plasma, are the earliest controlled fusion devices. The stellarator was invented by Lyman Spitzer in 1950 and built the next year at what later became the Princeton Plasma Physics Laboratory. The name "stellarator" originates from the possibility of harnessing the power source of the sun, a stellar object.

Stellarators were popular in the 1950s and 60s, but the much better results from tokamak designs led to their falling from favour in the 1970s. More recently, in the 1990s, problems with the tokamak concept have led to renewed interest in the stellarator design, and a number of new devices have been built. Some important modern stellarator experiments are Wendelstein, in Germany, and the Large Helical Device, in Japan.

Inertial confinement fusion

Inertial confinement fusion (ICF) is a process where nuclear fusion reactions are initiated by heating and compressing a fuel target, typically in the form of a pellet. The pellets most often contain a mixture of deuterium and tritium.


 

To compress and heat the fuel, energy is delivered to the outer layer of the target using high-energy beams of laser light, electrons or ions, although for a variety of reasons, almost all ICF devices to date have used lasers. The aim of ICF is to produce a state known as “ignition”, where this heating process causes a chain reaction that burns a significant portion of the fuel. Typical fuel pellets are about the size of a pinhead and contain around 10 milligrams of fuel. In practice, only a small proportion of this fuel will undergo fusion, but if all this fuel were consumed it would release the energy equivalent to burning a barrel of oil.

To date most of the work in ICF has been carried out in France and the United States, and generally it has seen less development effort than magnetic approaches. Two large projects are currently underway: the Laser Mégajoule in France and the National Ignition Facility in the United States.

All functioning fusion reactors are listed in Fusion experimental devices, classified by confinement method.

 Reference: Wikipedia, the free encyclopedia http://en.wikipedia


The Project ITER – past and present

April 30, 2011

Composed by Galina Vitkova

 

The logo of the ITER Organization

 

“We firmly believe that to harness fusion energy is the only way to reconcile huge conflicting demands which will confront humanity sooner or later.”

Director-General Osamu Motojima,  Opening address, Monaco International ITER Fusion Energy Days, 23 November 2010

 

ITER was originally an acronym for International Thermonuclear Experimental Reactor, but that title was dropped in view of the negative popular connotation of "thermonuclear", especially in conjunction with "experimental". "Iter" also means "journey", "direction" or "way" in Latin, reflecting ITER's potential role in harnessing nuclear fusion (see also The ViCTE Newsletter Number 28 – SVOMT revising / March 2011, Nuclear power – fission and fusion) as a peaceful power source.

ITER is a large-scale scientific project intended to prove the practicability of fusion as an energy source and to prove that it can work without negative impact. Moreover, it is expected to collect the data necessary for the design and subsequent operation of the first electricity-producing fusion power plant. Besides, it aims to demonstrate the possibility of producing commercial energy from fusion. ITER is the culmination of decades of fusion research: more than 200 tokamaks (see also The ViCTE Newsletter Number 29 – Easy such and so / April 2011, Nuclear power – tokamaks) built all over the world have paved the way to the ITER experiment. ITER is the result of the knowledge and experience these machines have accumulated. ITER, which will be twice the size of the largest tokamak currently operating, is conceived as the necessary experimental step on the way to demonstrating the potential of a fusion power plant.

The scientific goal of the ITER project is to deliver ten times the power it consumes. From 50 MW of input power, the ITER machine is designed to produce 500 MW of fusion power – making it the first fusion experiment to produce net energy. During its operational lifetime, ITER will test key technologies necessary for the next step and will develop the technologies and processes needed for a fusion power plant – including superconducting magnets and remote handling (maintenance by robot). Furthermore, it will verify tritium breeding concepts and will refine neutron shield and heat conversion technology. As a result the ITER project will demonstrate that a fusion power plant is able to capture fusion energy for commercial use.

Launched as an idea for international collaboration in 1985, the ITER Agreement now includes China, the European Union, India, Japan, Korea, Russia and the United States, representing over half of the world's population. Twenty years of design work and complex negotiations have been necessary to bring the project to where it is today.

The ITER Agreement was officially signed at the Elysée Palace in Paris on 21 November 2006 by ministers from the seven ITER Members. In a ceremony hosted by French President Jacques Chirac and the President of the European Commission, José Manuel Durão Barroso, this Agreement established a legal international entity responsible for the construction, operation and decommissioning of ITER.

On 24 October 2007, after ratification by all Members, the ITER Agreement entered into force and officially established the ITER Organization. ITER was originally expected to cost approximately €5 billion. However, the rising price of raw materials and changes to the initial design have more than tripled that amount, to about €16 billion.

Cost Breakdown of ITER Reactor

 

The program is anticipated to last for 30 years – 10 for construction, and 20 of operation. The reactor is expected to take 10 years to build with completion in 2018. The ITER site in Cadarache, France stands ready: in 2010, construction began on the ITER Tokamak and scientific buildings. The seven ITER Members have shared in the design of the installation, the creation of the international project structure, and in its funding.

Key components for the Tokamak will be manufactured in the seven Member States and shipped to France by sea. From the port in Berre l'Etang on the Mediterranean, the components will be transported by special convoy along the 104 kilometres of the ITER Itinerary to Cadarache. The exceptional size and weight of certain Tokamak components made large-scale public works necessary to widen roads, reinforce bridges and modify intersections. Costs were shared by the Bouches-du-Rhône department Council (79%) and the French State (21%). Work on the Itinerary was completed in December 2010.

Two trial convoys will be organized in 2011 to put the Itinerary’s resistance and design to the test before a full-scale practice convoy in 2012, and the arrival of the first components for ITER by sea.

Between 2012 and 2017, 200 exceptional convoys will travel by night at reduced speeds along the ITER Itinerary, bypassing 16 villages, negotiating 16 roundabouts, and crossing 35 bridges.

Manufacturing of components for ITER has already begun in the Members' industries all over the world. The level of coordination required for the successful fabrication of over one million parts for the ITER Tokamak alone is creating, day by day, a new model of international scientific collaboration.

ITER, without question, is a very complex project. Building ITER will require a continuous and joint effort involving all partners. In any case, this project remains a challenging task and for most of participants it is a once-in-a-lifetime opportunity to contribute to such a fantastic endeavour.

 



Beware of danger when playing PC games online

October 30, 2010
Example of firewall function: blocking spyware

Image via Wikipedia

  
By P. B.

The Internet is a place where the user can find a lot of information, entertainment or work, but on the other hand, the same user can "catch" viruses, spyware or malware. Many people don't understand why somebody creates these harmful programs (see the notes below about these programs). However, the answer is easy – just as in everyday life, we can meet a lot of people with wicked goals. Gaining money through special programs is a common goal of many Internet thieves, and there are various methods of doing it. On the Internet the user may visit web pages which contain viruses, spyware or malware. This very often happens with game pages, because games are typically connected with gamblers and for this reason can be a source of money.

But the harmful code may not be only on the web pages; games themselves can include it. It means that the player, when downloading a game and installing it on the local computer, also installs the harmful code without any suspicion. This can be very dangerous – one small example. Imagine the user installed a game that involves a so-called keylogger. A keylogger is a small program that stealthily records all keys the user presses. Many antivirus programs consider this software a virus (usually a Trojan horse – see A worm, a virus or a Trojan horse?). So, the keylogger writes all pressed keys to a text file and sends it to the thief's e-mail address. Suppose that after that the user visited his online betting site at http://www.tipsport.cz, where he had to write the text "www.tipsport.cz" followed by the username "honza" and the password "sazeni123". The keylogger put this string of characters into the text file as "www.tipsport.czhonzasazeni123". The thief received the file, found this text and was very quickly able to connect to Honza's account and transfer all the money from it to his own account. It was easy, wasn't it? Of course, the probability of this coincidence is not very high, but who knows.

Replica of the Trojan Horse in Troy, Turkey

Image by Alaskan Dude via Flickr

 

Notes:

  • Malware means malicious software – authors of malware create programs for harming other software. Malware includes viruses, Trojan horses, spyware and adware.
  • Spyware is a program that uses the Internet for sending data from the computer without the awareness of the computer's user. It differs from a backdoor in the content of the data it sends, i.e. it sends only statistical data (e.g. an overview of visited pages or installed programs) and can be used for advertising. Spyware is typically widespread in shareware programs, and the authors of the shareware know about it and tolerate it because they want to earn money.
  • Adware, or advertising-supported software, is any software that automatically downloads advertisements to a computer. The goal of the adware is to generate revenue for its author. Adware, by itself, is harmless; but some adware may come with integrated spyware such as keyloggers.

See more in What is Adware, Spyware and Anti-virus?

 

 


Online game playing

October 25, 2010
By  P. B.

There are a lot of servers on the Internet that provide online game playing. Playing is very easy, and many users who have only basic knowledge of computers and the Internet can play these games. The most common way of starting to play is to open a browser and visit the Google page. Then, in the search box, write two words: online games, and Google immediately offers you many servers, e.g. www.onlinegames.net, www.freeonlinegames.com or the Czech pages www.super-games.cz, etc. Each server offers many games of different sorts. There you may find games for boys, girls, kids, most played games, new games, and others. Or you can select games by subject, i.e. adventure games, sports games, war games, erotic or strategic games, etc.

Assigning a path for Leviathan

Image by Alpha Auer, aka. Elif Ayiter via Flickr

Many games have their own manual on how to play, so the second step is to study the manual. Depending on the subject of a game the user must use, for example, the Right Arrow key to go forward, Left Arrow to go back, PgUp to go up, Ctrl to shoot. It is very easy to understand how to play and to recognize what the goal of the game is, e.g. to gain the maximum of points, to kill everything that moves, or to be the first at the end. These games are rather simple-minded, but some people become too addicted to them, trying to improve their best performance. Sometimes they spend hours in front of the screen every day and lose all idea of time.

I have tried four different servers and about six different games. In my opinion these games are very easy and, for me, boring, but for younger users or for people who are bored right now the games can be interesting. However, the most important thing (in my view) is that two of the tested servers were infected (my computer warned me that the pages are dangerous and can contain malware, spyware or viruses). My friends, who have problems with their computers in this sense, want me to repair their computers – maybe that is the reason why I don't like playing games online directly on the Internet.

Quake3 + net_server
Image by [Beta] via Flickr

 

On the other hand, I have also tried the game Quake 3 (a game demo – not through the Internet, but after installing the game on my computer) and I can affirm that it was pretty interesting.

 

Quake 3 Arena is a real shooting game. There is no other goal than to kill all other players (though in other versions, like Team Deathmatch or Capture the Flag, two teams fight against each other). The player can choose the level of difficulty (from easy to hard) and various places. Quake 3 Arena is the mode where the player fights in the Arena against computer-controlled bots (artificially intelligent fighters).

The fighters do battle equipped with various weapons as follows:

  • Gauntlet – a basic weapon for very close fighting, usually used only when the player has no other gun;
  • Machinegun – a weak gun, again used only when a better gun is not available;
  • Shotgun – a weapon for close fighting, one shot per second;
  • Grenade Launcher – shoots grenades;
  • Rocket Launcher – a very popular weapon because it is very easy to use and its impact is huge; but the flight of a rocket is slow, so players get used to shooting at the wall or floor because the rocket blast has a wide spread;
  • Lightning Gun – an electric gun, very effective because it can kill a rival in 2 seconds;
  • Railgun – a weapon for long distances, very accurate, but with a low rate of fire;
  • Plasma Gun – shoots plasma pulses;
  • BFG10K – the most powerful weapon, but the worst balanced, and for this reason it is not often used by players (BFG = Bio Force Gun).

It is important for players to find and acquire armor – the maximum is 200 armor points. The armor provides protection that absorbs 2/3 of incoming damage (a rough sketch of this rule follows below). Similarly, players must watch their health (at the beginning they have 125 points, which equals 100%, and they can reach a maximum of 200 points).
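A rough sketch of this damage rule, with purely illustrative numbers, might look like this in Python:

```python
# Armor absorbs 2/3 of incoming damage; the rest is taken from health.
def apply_damage(health, armor, damage):
    absorbed = min(armor, damage * 2 // 3)  # armor soaks up to 2/3 of the hit
    return health - (damage - absorbed), armor - absorbed

health, armor = 125, 100
health, armor = apply_damage(health, armor, 90)
print(health, armor)   # 95 40: 60 points absorbed by armor, 30 taken from health
```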

Sometimes (depending on the game) additional features are involved – a Battle Suit, Haste (makes movement and shooting twice as fast for 30 seconds), Invisibility (for 30 seconds), a Medkit, a Teleporter (the player is moved to a random place), Regeneration, Flight (for 60 seconds) and so on.

  

 


Video Games Platforms

October 6, 2010
  
Composed by Galina Vitkova

 

Terminology

The term game platform refers to the particular combination of electronic or computer hardware which, in connection with low-level software, allows a video game to run. In general, a hardware platform means a group of compatible computers that can run the same software. A software platform comprises a major piece of software, such as an operating system, an operating environment, or a database, under which various smaller application programs can be designed to run. The main video game platforms are reviewed below.

  

Platforms for PC games 

PC games often require specialized hardware in the user's computer in order to play, such as a specific generation of graphics processing unit or an Internet connection for online play, although these system requirements vary from game to game. In any case, your PC hardware capabilities should meet the minimum hardware requirements established for particular PC games. On the other hand, many modern computer games allow, or even require, the player to use a keyboard and mouse simultaneously without demanding any additional devices.

As of the 2000s, PC games are often regarded as offering a deeper and more complex experience than console games. 

 

Video game consoles platform

A video game console is an interactive entertainment computer or modified computer system that produces a video display signal which can be used with a display device to show video games.    

Usually, this system is connected to a common television set or composite video monitor. A composite monitor is any analog video display that receives input in the form of an analog composite video signal through a single cable. The monitor is different from a conventional TV set because it does not have an internal RF (Radio Frequency) tuner or RF converter. However, a user can install an external device that emulates a TV tuner. 

  

Handheld game consoles platform

A handheld game console is a lightweight, portable electronic device of a small size with a built-in screen, games controls and speakers. A small size allows people to carry handheld game consoles and play games at any time or place. 

A One Station handheld console with game

Image via Wikipedia

 The oldest true handheld game console with interchangeable cartridges is the Milton Bradley Microvision issued in 1979. 

Nintendo released the Game Boy, a popular handheld console concept, in 1989, and continues to dominate the handheld console market with successive Game Boy models and, most recently, the Nintendo DS.

  

Handheld electronic games platform

In the past decade, handheld video games have become a major sector of the video game market. For example, in 2004 sales of portable software titles exceeded $1 billion in the United States.

The Gizmondo handheld video game unit

Image via Wikipedia

Handheld electronic games are very small portable devices for playing interactive electronic games, often miniaturized versions of video games. The controls, display and speakers are all part of a single unit. They usually have displays designed to play one game. Due to this simplicity they can be made as small as a digital watch, and sometimes are. Usually they do not have interchangeable cartridges, disks, etc., and are not reprogrammable. The visual output of these games can range from a few small light bulbs or light-emitting diode (LED) lights to calculator-like alphanumeric screens. Nowadays these outputs have mostly been displaced by liquid crystal and vacuum fluorescent display screens. Handhelds were most popular from the late 1970s into the early 1990s. They are both the precursors of and inexpensive alternatives to the handheld game console.

Mobile games platform

A mobile game is a video game played on a mobile phone, smartphone, PDA (Personal Digital Assistant), handheld computer or portable media player.  

The 16 best iPhone games of 2009

Image by docpop via Flickr

The first game that was pre-installed onto a mobile phone was Snake on selected Nokia models in 1997. Snake and its variants have since become the most-played video game on the planet, with over a billion people having played the game. Mobile games are played using the technologies present on the device itself. The games may be installed over the air, they may be side loaded onto the handset with a cable, or they may be embedded on the handheld devices by the original equipment manufacturer (OEM) or by the mobile operator. 

For networked games, there are various technologies in common use, for example, text message (SMS), multimedia message (MMS) or GPRS location identification. 

  

Arcade games 

The Simpsons arcade game by Konami

Image by Lost Tulsa via Flickr

An arcade game is a coin-operated entertainment machine, usually installed in public businesses such as restaurants, public houses, and video arcades. Most arcade games are redemption games, merchandisers (such as claw cranes), video games, or pinball machines. The golden age of video arcade games in the early 1980s was the peak era of video arcade game popularity, innovation, and earnings.

Furthermore, by the late 1990s and early 2000s, networked gaming via consoles and computers across the Internet had appeared and replaced arcade games. The arcades also lost their forefront position in new game releases. Having the choice between playing a game at an arcade three or four times (perhaps 15 minutes of play for a typical arcade game) and renting, at about the same price, the exact same game for a video game console, people chose the console. To remain viable, arcades added other elements to complement the video games, such as redemption games, merchandisers, and games that use special controllers largely inaccessible to home users. Besides, they equipped games with reproductions of automobile or airplane cockpits, motorcycle- or horse-shaped controllers, or highly dedicated controllers such as dancing mats and fishing rods. Moreover, today arcades have extended their activities with food service etc., striving to become "fun centers" or "family fun centers".

All modern arcade games use solid state electronics and integrated circuits. In the past coin-operated arcade video games generally used custom per-game hardware often with multiple CPUs, highly specialized sound and graphics chips, and the latest in computer graphics display technology. Recent arcade game hardware is often based on modified video game console hardware or high-end PC components.

References: http://en.wikipedia.org/

 

 

