Why Technical English

Happy New Year!

January 5, 2013
Leave a Comment


 

I look forward to helping you improve your Technical English even more this year.

Galina Vitkova

 


Search engine – essential information

December 29, 2011
12 Comments
Composed by Galina Vitkova using Wikipedia

The term search engine usually refers to a tool for searching for information on the Web. Other kinds of search engines are enterprise search engines, which search on intranets, personal search engines, and mobile search engines. Different selection and relevance criteria may apply in different environments or for different uses.

Diagram of the search engine concept

Web search engines operate in the following order: 1) Web crawling, 2) Indexing, 3) Searching. Search engines store information about a large number of web pages, which they retrieve from the Web itself. These pages are fetched by a Web crawler (sometimes also known as a spider).

Architecture of a Web crawler.

A Web crawler is an automated Web browser that follows every link it sees. The contents of each page are then analyzed to determine how it should be indexed. Data about web pages are stored in an index database. Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages. Other engines, such as AltaVista, store every word of every page they find. The cached page always holds the actual text that was indexed. Search engines use regularly updated indexes to operate quickly and efficiently.
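To make the crawl-and-index idea above more concrete, here is a minimal sketch of a crawler in Python, using only the standard library. It is not how any production engine actually works: the start URL is a placeholder, and real crawlers add politeness rules (robots.txt, rate limits), deduplication and far more robust error handling.

```python
# A minimal sketch of the crawl step, using only Python's standard library.
# The start URL is a placeholder; a real crawler also needs politeness rules
# (robots.txt, rate limiting) and far more robust error handling.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkCollector(HTMLParser):
    """Collects the href attribute of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, max_pages=10):
    """Follow links breadth-first, returning {url: page_text}."""
    queue, seen, pages = [start_url], {start_url}, {}
    while queue and len(pages) < max_pages:
        url = queue.pop(0)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue                       # skip unreachable or malformed URLs
        pages[url] = html                  # store content for later indexing
        parser = LinkCollector()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)  # resolve relative links
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return pages


# pages = crawl("http://example.com/")     # placeholder start URL
```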

When a user makes a query, commonly by giving key words, the search engine looks up the index and provides a listing of best-matching web pages according to its criteria. Usually the listing comprises a short summary containing the document title and sometimes parts of the text. Most search engines support the use of the Boolean terms AND, OR and NOT to further specify the search query. The listing is often sorted with respect to some measure of relevance of the results. An advanced feature is proximity search, which allows users to define the distance between key words.
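The index lookup and the Boolean operators AND, OR and NOT map naturally onto set operations. The following toy sketch, with three invented one-line "documents", illustrates the idea of an inverted index; real engines add ranking, stemming, stop-word handling and much more.

```python
# A toy inverted index illustrating keyword lookup with AND / OR / NOT.
# The three "documents" are invented examples.
docs = {
    1: "web search engines crawl and index pages",
    2: "a crawler is an automated web browser",
    3: "users query the index with key words",
}

# Build the index: word -> set of document ids containing it.
index = {}
for doc_id, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

all_ids = set(docs)

def lookup(word):
    return index.get(word, set())

# Boolean operators map directly onto set operations.
print(lookup("web") & lookup("index"))        # AND -> {1}
print(lookup("crawler") | lookup("query"))    # OR  -> {2, 3}
print(all_ids - lookup("crawler"))            # NOT crawler -> {1, 3}
```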

Most Web search engines are commercial ventures supported by advertising revenue. As a result, some of them employ the controversial practice of allowing advertisers to pay to have their listings ranked higher in search results. The vast majority of search engines run by private companies use proprietary algorithms and closed databases, though a few of them are open source.

Nowadays the most popular search engines are as follows:

Google. Around 2001, the Google search engine rose to prominence. Its success was based in part on the concept of link popularity and PageRank. Furthermore, it uses more than 150 criteria to determine relevancy. Google is currently the most widely used search engine.

Baidu. Due to the differences between ideographic and alphabetic writing systems, the Chinese search market did not boom until the introduction of Baidu in 2000. Since then, neither Google, Yahoo nor Microsoft has been able to reach the top position there as they have in other parts of the world. The reason may be the media control policy of the Chinese government, which requires all network media to filter possibly sensitive information out of their web pages.

Yahoo! Search. Only since 2004 has Yahoo! Search been an original web crawler-based search engine, with a reinvented crawler called Yahoo! Slurp. Its new search engine results were included in all Yahoo! sites that had a web search function. It also started to sell its search engine results to other companies, to show on their web sites.

After the booming success of keyword search engines such as Google and Yahoo! Search, a new type of search engine, the meta search engine, appeared. Strictly speaking, a meta search engine is not a search engine of its own; technically, it is a search engine built on top of other search engines. A typical meta search engine accepts user queries in the same way as traditional search engines do. But instead of looking the key words up in its own database, it sends those queries to other, non-meta search engines. Then, based on the results returned by several non-meta search engines, it selects the best ones (according to various algorithms) and shows them back to users. Examples of such meta search engines are Dogpile (http://www.dogpile.com/) and All in One News (http://www.allinonenews.com/).

Meta search engine

PS: The text has been drawn up as part of an upcoming e-book titled Internet English (see Number 33 – WWW, Part 1 / August 2011 – Editorial). G. Vitkova

 

Dear visitor, if you want to improve your professional English and at the same time gain basic, comprehensive, targeted information about the Internet and the Web, then

subscribe to “Why Technical English”.

Find the subscription options on the right sidebar and:

  • Subscribe by Email Sign me up        OR
  • Subscribe with Bloglines                   OR
  • Subscribe.ru

 


Website – basic information

November 28, 2011
7 Comments

Website and Its Characteristics

                                                                                             Composed by Galina Vitkova using Wikipedia

A website (or web site) is a collection of web pages, typically common to a particular domain name on the Internet. A web page is a document usually written in HTML (HyperText Markup Language), which is almost always accessible via HTTP (Hypertext Transfer Protocol). HTTP is a protocol that transfers information from the website server so it can be displayed in the user's web browser. All publicly accessible websites together constitute the immense World Wide Web of information. More formally, a website might be considered a collection of pages dedicated to a similar or identical subject or purpose and hosted through a single domain.

The pages of a website are accessed from a common root URL (Uniform Resource Locator or Universal Resource Locator) called the homepage, and usually reside on the same physical server. The URLs of the pages organise them into a hierarchy. Nonetheless, the hyperlinks between web pages determine how the reader perceives the overall structure and how the traffic flows between the different parts of the site. The first on-line website appeared in 1991 at CERN (the European Organization for Nuclear Research, situated in the suburbs of Geneva on the Franco–Swiss border) – for more information see ViCTE Newsletter Number 5 – WWW History (Part 1) / May 2009 and Number 6 – WWW History (Part 2) / June 2009.

A website may belong to an individual, a business or another organization. Any website can contain hyperlinks to any other website, so distinguishing one particular site from another may sometimes be difficult for the user.

Websites are commonly written in, or dynamically converted to, HTML and are accessed using a web browser. Websites can be accessed from a number of computer-based and Internet-enabled devices, including desktop computers, laptops, PDAs (personal digital assistants) and cell phones.

Website Drafts and Notes

Image by Jayel Aheram via Flickr

A website is hosted on a computer system called a web server or an HTTP server. These terms also refer to the software that runs on the servers and that retrieves and delivers the web pages in response to users' requests.

Static and dynamic websites are distinguished. A static website is one whose content is not expected to change frequently and is maintained manually by a person or persons using editor software. It provides the same standard information to all visitors for a certain period of time between updates of the site.

A dynamic website is one that has frequently changing information or interacts with the user, relying either on stored state (HTTP cookies or database variables, e.g. previous history, session variables, server-side variables) or on direct interaction (form elements, mouseovers, etc.). When the web server receives a request for such a page, the page is generated automatically by the server software. A site can display the current state of a dialogue between users, monitor a changing situation, or provide information adapted in some way for the particular user.
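As a small illustration of the difference, the sketch below generates a page at request time using Python's standard http.server module, so every reload shows new content (the current time and a hit counter). It is only a demonstration of the principle; the host, port and page content are arbitrary choices, and real dynamic sites use full web frameworks and databases.

```python
# A minimal sketch of a "dynamic" page: the response is generated at request
# time instead of being read from a static file. Uses only the standard library.
from datetime import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer


class DynamicHandler(BaseHTTPRequestHandler):
    hits = 0                                   # simple changing server-side state

    def do_GET(self):
        DynamicHandler.hits += 1
        body = (f"<html><body><h1>Hello</h1>"
                f"<p>Generated at {datetime.now():%H:%M:%S}, "
                f"request number {DynamicHandler.hits}.</p></body></html>")
        data = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)


if __name__ == "__main__":
    # Visit http://localhost:8000/ and reload: the content changes each time.
    HTTPServer(("localhost", 8000), DynamicHandler).serve_forever()
```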

Static content may also be dynamically generated, either periodically or when certain conditions for regeneration occur, in order to avoid the performance loss of invoking the dynamic engine for every request.

Website Designer & SEO Company Lexington Devel...
Image by temptrhonda via Flickr

Some websites demand a subscription to access some or all of their content. Examples of subscription websites include numerous business sites, parts of news websites, academic journal websites, gaming websites, social networking sites, websites affording real-time stock market data, websites providing various services (e.g., websites offering storing and/or sharing of images, files, etc.) and many others.

For showing active content or even creating rich Internet applications, plugins such as Microsoft Silverlight, Adobe Flash, Adobe Shockwave or applets are used. They provide interactivity for the user and real-time updating within web pages (i.e. pages don't have to be loaded or reloaded to effect any changes), mainly applying the DOM (Document Object Model) and JavaScript.

There are many varieties of websites, each specialising in a particular type of content or use, and they may be arbitrarily classified in any number of ways. A few such classifications might include: Affiliate, Archive site, Corporate website, Commerce site, Directory site and many others (see a detailed classification in Types of websites).

In February 2009, Netcraft, an Internet monitoring company that has tracked web growth since 1995, reported that there were 106,875,138 websites in 2007 and 215,675,903 websites in 2009 with domain names and content on them, compared with just 18,000 websites in August 1995.

PS: Spelling: which form is correct, “website” or “web site”?

The form “website” has gradually become the standard spelling. It is used, for instance, by such leading dictionaries and encyclopedias as the Canadian Oxford Dictionary, the Oxford English Dictionary and Wikipedia. Nevertheless, the form “web site” is still widely used, e.g. by Encyclopædia Britannica (including its Merriam-Webster subsidiary). Among major Internet technology companies, Microsoft uses “website” and occasionally “web site”, Apple uses “website”, and Google uses “website”, too.

PPS: You can find unknown technical terms in the Internet English Vocabulary.

Reference: Website – Wikipedia, the free encyclopedia

Have You Donated To Wikipedia Already?

Do you use Wikipedia? Do you know that Jimmy Wales, a founder of Wikipedia, decided to keep Wikipedia advertising-free and unbiased? As a result, the project now has financial problems with covering its costs. Any donation, even a small sum, is helpful. Thus, here's the page where you can donate.

Dear visitor, if you want to improve your professional English and at the same time gain basic, comprehensive, targeted information about the Internet and the Web, subscribe to “Why Technical English”.

Look at the right sidebar and subscribe as you like:

  • by Email subscription … Sign me up
  • Subscribe with Bloglines
  • Subscribe.ru

Right now, while the e-book “Internet English” is being prepared (see ViCTE Newsletter Number 33 – WWW, Part 1 / August 2011), posts on this topic are being published here. Your comments on the posts are welcome.


 


The Semantic Web – great expectations

October 31, 2011
3 Comments

By Galina Vitkova

The Semantic Web is the further development of the World Wide Web, aimed at interpreting the content of web pages as machine-readable information.

In the classical Web, based on HTML pages, information is contained in text or documents, which a browser reads and composes into web pages visible or audible to humans. The Semantic Web is supposed to store information as a semantic network through the use of ontologies. A semantic network is usually a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent relations among the concepts. An ontology is simply a vocabulary that describes objects and how they relate to one another. So a program agent is able to mine facts immediately from the Semantic Web and draw logical conclusions based on them. The Semantic Web functions together with the existing Web and uses the HTTP protocol and resource identifiers (URIs).

The term Semantic Web was coined by Sir Tim Berners-Lee, the inventor of the World Wide Web and director of the World Wide Web Consortium (W3C), in May 2001 in the journal Scientific American. Tim Berners-Lee considers the Semantic Web the next step in the development of the World Wide Web. W3C has adopted and promoted this concept.

Main idea

The Semantic Web is simply a hyper-structure above the existing Web. It extends the network of hyperlinked, human-readable web pages by inserting machine-readable metadata about pages and how they are related to each other. It is proposed to help computers “read” and use the Web in a more sophisticated way. Metadata can allow more complex, focused Web searches with more accurate results. To paraphrase Tim Berners-Lee, the extension will let the Web – currently similar to a giant book – become a giant database. Machine processing of the information in the Semantic Web is enabled by its two most important features.

  • First – the all-around application of uniform resource identifiers (URIs), which are known as addresses. Traditionally on the Internet these identifiers are used for pointing hyperlinks to an addressed object (web pages, e-mail addresses, etc.). In the Semantic Web the URIs are also used for specifying resources, i.e. a URI identifies exactly one object. Moreover, in the Semantic Web not only web pages or their parts have URIs, but objects of the real world may have URIs too (e.g. humans, towns, novel titles, etc.). Furthermore, abstract resource attributes (e.g. name, position, colour) have their own URIs. As URIs are globally unique, they make it possible to identify the same objects in different places in the Web. Concurrently, URIs of the HTTP protocol (i.e. addresses beginning with http://) can be used as addresses of documents that contain a machine-readable description of these objects.

  • Second – the application of semantic networks and ontologies. Present-day methods of automatically processing information on the Internet are, as a rule, based on frequency and lexical analysis or parsing of text that was designed for human perception. In the Semantic Web, the RDF (Resource Description Framework) standard is applied instead, which uses semantic networks (i.e. graphs whose vertices and edges have URIs) for representing the information. Statements coded by means of RDF can be further interpreted by ontologies created in compliance with the standards of RDF Schema and OWL (Web Ontology Language) in order to draw logical conclusions. Ontologies are built using so-called description logics. Ontologies and schemata help a computer to understand human vocabulary. (A small sketch of such a semantic network follows this list.)
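The following sketch illustrates, in plain Python, what a tiny semantic network of subject–predicate–object triples might look like and how a program agent can draw a simple logical conclusion from it (inferring types through a subClassOf hierarchy). The resources and shortened names stand in for real URIs and are invented for illustration; real systems use RDF, RDFS and OWL tooling rather than hand-written code like this.

```python
# A tiny semantic network stored as (subject, predicate, object) triples.
# Full URIs are shortened to readable names purely for illustration.
triples = {
    ("Prague", "isCapitalOf", "CzechRepublic"),
    ("Prague", "type", "City"),
    ("City", "subClassOf", "PopulatedPlace"),
    ("PopulatedPlace", "subClassOf", "Place"),
}

def superclasses(cls):
    """Follow subClassOf edges transitively (a very small piece of ontology reasoning)."""
    found = set()
    frontier = {cls}
    while frontier:
        nxt = {o for (s, p, o) in triples if p == "subClassOf" and s in frontier}
        frontier = nxt - found
        found |= nxt
    return found

def types_of(resource):
    """Direct types plus everything inferred through the class hierarchy."""
    direct = {o for (s, p, o) in triples if s == resource and p == "type"}
    inferred = set()
    for t in direct:
        inferred |= superclasses(t)
    return direct | inferred

print(types_of("Prague"))   # {'City', 'PopulatedPlace', 'Place'} (in some order)
```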

 

Semantic Web Technologies

The architecture of the Semantic Web can be represented by the Semantic Web Stack, also known as the Semantic Web Cake or Semantic Web Layer Cake. The Semantic Web Stack is an illustration of a hierarchy of languages, where each layer exploits and uses capabilities of the layers below. It shows how the technologies that are standardized for the Semantic Web are organized to make the Semantic Web possible. It also shows how the Semantic Web is an extension (not a replacement) of the classical hypertext Web. The illustration was created by Tim Berners-Lee. The stack is still evolving as the layers are concretized.

Semantic Web Stack

As shown in the Semantic Web Stack, the following languages or technologies are used to create the Semantic Web. The technologies from the bottom of the stack up to OWL (Web Ontology Language) are currently standardized and accepted for building Semantic Web applications. It is still not clear how the top of the stack is going to be implemented. All layers of the stack need to be implemented to achieve the full vision of the Semantic Web.

  • XML (eXtensible Markup Language) is a set of rules for encoding documents in machine-readable form. It is a markup language like HTML. XML complements (but does not replace) HTML by adding tags that describe data.
  • XML Schema published as a W3C recommendation in May 2001 is one of several XML schema languages. It can be used to express a set of rules to which an XML document must conform in order to be considered ‘valid’.
  • RDF (Resource Description Framework) is a family of W3C specifications originally designed as a metadata data model. It has come to be used as a general method for conceptual description of information that is implemented in web resources. RDF does exactly what its name indicates: using XML tags, it provides a framework to describe resources. In RDF terms, everything in the world is a resource. This framework pairs the resource with a specific location in the Web, so the computer knows exactly what the resource is. To do this, RDF uses triples written as XML tags to express this information as a graph. These triples consist of a subject, property and object, which are like the subject, verb and direct object of an English sentence.
  • RDFS (RDF Schema, the RDF Vocabulary Description Language) provides a basic vocabulary for RDF; it adds classes, subclasses and properties to resources, creating a basic language framework.
  • OWL (Web Ontology Language) is a family of knowledge representation languages for creating ontologies. It extends RDFS, being the most complex layer; it formalizes ontologies, describes relationships between classes and uses logic to make deductions.
  • SPARQL (SPARQL Protocol and RDF Query Language) is an RDF query language, which can be used to query any RDF-based data. It enables semantic web applications to retrieve information (a rough sketch of the underlying pattern-matching idea follows this list).
  • Microdata (HTML) is a specification used to nest semantics within existing content on web pages. Search engines, web crawlers, and browsers can extract and process Microdata from a web page to provide better search results.
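SPARQL itself is a W3C-standardized query language; the sketch below is not SPARQL but a rough, hand-made illustration of its underlying idea: matching triple patterns, with variables marked by a leading “?”, against a set of RDF-like triples. The triples and names are invented.

```python
# A rough illustration of the idea behind SPARQL: matching triple patterns
# against a set of RDF-like triples. Names are invented; this is not real SPARQL.
triples = [
    ("ITER", "locatedIn", "Cadarache"),
    ("Cadarache", "locatedIn", "France"),
    ("ITER", "type", "FusionReactor"),
]

def match(pattern):
    """Variables start with '?'; return one binding dict per matching triple."""
    results = []
    for triple in triples:
        binding = {}
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):
                binding[pat] = val
            elif pat != val:
                break
        else:
            results.append(binding)
    return results

# Roughly: SELECT ?x WHERE { ?x type FusionReactor }
print(match(("?x", "type", "FusionReactor")))   # [{'?x': 'ITER'}]
# Roughly: SELECT ?place WHERE { ITER locatedIn ?place }
print(match(("ITER", "locatedIn", "?place")))   # [{'?place': 'Cadarache'}]
```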

As mentioned, the top layers contain technologies that are not yet standardized or comprise just ideas. Perhaps the Cryptography and Trust layers are the least familiar of them. Cryptography ensures and verifies the origin of web statements from a trusted source by means of digital signatures of RDF statements. Trust in derived statements means that the premises come from a trusted source and that the formal logic used in deriving new information is reliable.


World Wide Web

September 3, 2011
Leave a Comment

Dear friends of Technical English,

I have just started publishing materials for my projected e-book devoted to Internet English, i.e. the English used around the Internet. It means that over a certain period of time I will publish posts which will form the basic technical texts of the units of the mentioned e-book, with the working title Internet English. The draft content of the e-book has already been published on my blog http://traintechenglish.wordpress.com in the newsletter Number 33 – WWW, Part 1 / August 2011. One topic in the list corresponds to one unit in the e-book.

Thus you find below the first post of a series dealing with Internet English. I hope these texts will help develop your professional English and at the same time will bring you topical information about the Internet.    Galina Vitkova

 

World Wide Web

 Composed by Galina Vitkova

The World Wide Web (WWW or simply the Web) is a system of interlinked, hypertext documents that runs over the Internet. A Web browser enables a user to view Web pages that may contain text, images, and other multimedia. Moreover, the browser ensures navigation between the pages using hyperlinks. The Web was created around 1990 by the Englishman Tim Berners-Lee and the Belgian Robert Cailliau working at CERN in Geneva, Switzerland.

Today, the Web and the Internet allow connecti...

The term Web is often mistakenly used as a synonym for the Internet itself, but the Web is a service that operates over the Internet, as e-mail, for example, does. The history of the Internet dates back significantly further than that of the Web.

Basic terms

The World Wide Web is the combination of four basic ideas:

  • The hypertext: a format of information which in a computer environment allows one to move from one part of a document to another or from one document to another through internal connections (called hyperlinks) among these documents;
  • Resource Identifiers: unique identifiers used to locate a particular resource (computer file, document or other resource) on the network – this is commonly known as a URL (Uniform Resource Locator) or URI (Uniform Resource Identifier), although the two have subtle technical differences;
  • The Client-server model of computing: a system in which client software or a client computer makes requests of server software or a server computer that provides the client with resources or services, such as data or files;
  • Markup language: characters or codes embedded in a text, which indicate structure, semantic meaning, or advice on presentation.

 

How the Web works

Viewing a Web page or other resource on the World Wide Web normally begins either by typing the URL of the page into a Web browser, or by following a hypertext link to that page or resource. The act of following hyperlinks from one Web site to another is referred to as browsing or sometimes as surfing the Web. The first step is to resolve the server-name part of the URL into an Internet Protocol (IP) address using the global, distributed Internet database known as the Domain Name System (DNS). The browser then establishes a Transmission Control Protocol (TCP) connection with the server at that IP address.
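The DNS-resolution step can be demonstrated with a few lines of Python using the standard socket module; the URL below is a placeholder, and the operating system's resolver does the actual DNS query.

```python
# A small sketch of the first step: resolving the server-name part of a URL
# into an IP address via DNS, using Python's standard library.
import socket
from urllib.parse import urlsplit

url = "http://example.com/index.html"        # placeholder URL
host = urlsplit(url).hostname                # "example.com"

# The operating system's resolver queries DNS and returns candidate addresses.
for family, _type, _proto, _name, sockaddr in socket.getaddrinfo(
        host, 80, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])          # address family and resolved IP address
```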

TCP state diagram

The next step is dispatching a HyperText Transfer Protocol (HTTP) request to the Web server in order to request the resource. In the case of a typical Web page, the HyperText Markup Language (HTML) text is first requested and parsed (parsing means a syntactic analysis) by the browser, which then makes additional requests for graphics and any other files that form part of the page in quick succession. After that the Web browser renders (see the note at the end of this paragraph) the page as described by the HyperText Markup Language (HTML), Cascading Style Sheets (CSS) and other files received, incorporating the images and other resources as necessary. This produces the on-screen page that the viewer sees.
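The request step itself can also be sketched at a low level: the snippet below opens the TCP connection and sends a plain HTTP/1.1 GET request by hand, then prints the status line and headers of the response. Browsers and HTTP libraries hide all of this (and add TLS, caching, redirects and so on); the host name is a placeholder.

```python
# A sketch of sending a plain HTTP/1.1 GET request over a TCP connection
# and reading the raw response. Real browsers and libraries hide these steps.
import socket

host, path = "example.com", "/"           # placeholder host and path

request = (
    f"GET {path} HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((host, 80), timeout=5) as conn:   # TCP connection
    conn.sendall(request.encode("ascii"))                        # send the HTTP request
    chunks = []
    while True:
        chunk = conn.recv(4096)            # read until the server closes the connection
        if not chunk:
            break
        chunks.append(chunk)

response = b"".join(chunks)
headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode("ascii", "replace"))                        # status line and headers
print(len(body), "bytes of HTML for the browser to parse and render")
```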

Notes:

  • Rendering is the process of generating an image from a model by means of computer programs.
  • Cascading Style Sheets (CSS) is a style sheet language used to describe the look and formatting of a document written in a markup language.

 

Web standards

At its core, the Web is made up of three standards:

  • the Uniform Resource Identifier (URI), which is a string of characters used to identify a name or a resource on the Internet;
  • the HyperText Transfer Protocol (HTTP), which specifies a networking protocol for distributed, collaborative, hypermedia information systems; HTTP is the foundation of data communication on the Web;
  • the HyperText Markup Language (HTML), which is the predominant markup language for web pages. A markup language provides a system for annotating a text in a way that is syntactically distinguishable from that text.

 


Study in Ireland

June 12, 2011
Leave a Comment
Dear friends of Technical English,
Here below you find a description of how my former student sees his experience with studying in Ireland. Nowadays there are many opportunities for studying and teaching everywhere across Europe. Learn Technical English and you can get a stay at one of Europe's technical universities.  Galina Vitkova
 
All Ireland Flag

My study in Ireland

By David Jirovec

I spent 8.5 months (both winter and summer semesters) in Ireland within the EU programme Erasmus. In Cork, Ireland's second biggest city, I was studying computer science, the same subject as at the Czech Technical University (CTU) in Prague. As for studying in Ireland, namely at the Cork Institute of Technology (CIT), it is rather similar to studying at a high school in Bohemia. A student attends his/her class of about 20 participants, and these people study nearly all courses together. We were recommended to choose one of these classes and join it. But since I am in my final year at CTU, I couldn't find any class with a suitable combination of courses. So finally, I took each course with a different class.

Cork City Marathon 2011

These small classes are set for both lectures and labs, so there are no extended lectures for 200 participants as at CTU. Students are never asked to come to the blackboard and show something to the whole class, and the results of any student's tests are never shown to other students.

Exams are carried out only in a written form. They take place in very big halls, where students from different courses are present at the same time. Very strict security measures are enforced there: students cannot take any bags with them, and it is forbidden to have even a mobile phone there. Exams are easier than at CTU; sometimes it is a matter of choosing 3 questions out of a total of 5 and answering them, instead of solving all the questions. A drawback is that there are not 3 free exam attempts as at CTU. If a student fails once, it is possible to try again in the summer, but it costs some euros. There is no given minimum of points for any test; it is only necessary to have a sum of at least 40/100 points at the end of a semester for both in-semester work and exams. And no compulsory attendance at any classes is required.

Seat of the Rectorate of CTU in Prague

 
Relationships between students and teachers are very good; teachers are friendly and helpful. I had no problems with my English in classes. Teachers were easy to understand, but sometimes it was more difficult to understand the students, especially when they were talking to each other. I don't see much improvement in my English grammar, but my communication skills in English improved a lot. It was definitely very profitable to use English for all day-to-day tasks and conversation, and to observe the little differences between the English commonly used in Ireland and the English taught at school in Prague. Irish people speak English that is often full of slang. So, I recommend anybody who is going to visit Ireland to consult http://www.urbandictionary.com/define.php?term=what%27s+the+craic%3F in order to understand phrases brought about by Celtic community dialects.

PS The ERASMUS Programme – studying in Europe and more – is the EU's flagship education and training programme, enabling 200 000 students to study and work abroad each year. In addition, it funds co-operation between higher education institutions across Europe. The programme not only supports students, but also professors and business staff who want to teach abroad, as well as helping university staff to receive training. European Commission, Education & Training (http://ec.europa.eu/education/lifelong-learning-programme/doc80_en.htm)


Fuel cycle in fusion reactors

May 25, 2011
Leave a Comment

Composed by Galina Vitkova

Common notes

The basic concept behind any fusion reaction is to bring two or more nuclei close enough together that the nuclear force pulls them together into one larger nucleus. If two light nuclei fuse, they will generally form a single nucleus with a slightly smaller mass than the sum of their original masses (though this is not always the case). The difference in mass is released as energy according to Albert Einstein's mass-energy equivalence formula E = mc². If the input nuclei are sufficiently massive, the resulting fusion product will be heavier than the sum of the reactants' original masses, and the reaction then requires an external source of energy. The dividing line between “light” and “heavy” nuclei is iron-56. Above this atomic mass, energy will generally be released by nuclear fission reactions; below it, by fusion.

Fusion between nuclei is opposed by their shared electrical charge, specifically the net positive charge of the protons in the nucleus. To overcome this electrostatic force, some external source of energy must be supplied. The easiest way to achieve this is to heat the atoms, which has the side effect of stripping the electrons from the atoms and leaving bare nuclei. In most experiments the nuclei and electrons are left in a fluid known as a plasma. The temperature required to provide the nuclei with enough energy to overcome their repulsion is a function of the total charge. Thus hydrogen, which has the smallest nuclear charge, reacts at the lowest temperature. Helium has an extremely low mass per nucleon and is therefore energetically favoured as a fusion product. As a consequence, most fusion reactions combine isotopes of hydrogen (protium, deuterium, or tritium) to form isotopes of helium.

In both magnetic confinement and inertial confinement fusion reactor designs tritium is used as a fuel. The experimental fusion reactor ITER (see also The Project ITER – past and present) and the National Ignition Facility (NIF) will use deuterium-tritium fuel. The deuterium-tritium reaction is favourable since it has the largest fusion cross-section, which leads to the greatest probability of a fusion reaction occurring.
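As a small worked example of the mass-energy bookkeeping described above, the snippet below computes the energy released by a single D-T reaction (D + T → He-4 + n) from standard atomic mass values; the result, about 17.6 MeV, is the figure usually quoted for this reaction.

```python
# A short worked example of the mass-energy bookkeeping for the D-T reaction
#   D + T -> He-4 + n,
# using standard atomic mass values (in unified atomic mass units, u).
m_D, m_T = 2.014102, 3.016049          # deuterium, tritium
m_He4, m_n = 4.002602, 1.008665        # helium-4, neutron
u_to_MeV = 931.494                     # 1 u of mass corresponds to ~931.494 MeV of energy

mass_defect = (m_D + m_T) - (m_He4 + m_n)        # ~0.0189 u "lost" as mass
energy_MeV = mass_defect * u_to_MeV              # E = mc^2 in convenient units

print(f"mass defect: {mass_defect:.6f} u")
print(f"energy released per reaction: {energy_MeV:.1f} MeV")   # ~17.6 MeV
```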

Deuterium-tritium (D-T) fuel cycle

Deuterium-tritium (D-T) fusion

 

The easiest and most immediately promising nuclear reaction to be used for fusion power is the deuterium-tritium fuel cycle. Hydrogen-2 (deuterium) is a naturally occurring isotope of hydrogen and as such is universally available. Hydrogen-3 (tritium) is also an isotope of hydrogen, but it occurs naturally in only negligible amounts as a result of its radioactive half-life of 12.32 years. Consequently, the deuterium-tritium fuel cycle requires the breeding of tritium from lithium. Most reactor designs use the naturally occurring mix of lithium isotopes.

Several drawbacks are commonly attributed to the D-T fuel cycle of fusion power:

  1. It produces substantial amounts of neutrons that result in induced radioactivity within the reactor structure.
  2. The use of D-T fusion power depends on lithium resources, which are less abundant than deuterium resources.
  3. It requires the handling of the radioisotope tritium. Similar to hydrogen, tritium is difficult to contain and may leak from reactors in certain quantity. Hence, some estimates suggest that this would represent a fairly large environmental release of radioactivity.

Problems with material design

The huge neutron flux expected in a commercial D-T fusion reactor poses problems for material design. Design of suitable materials is under way, but their actual use in a reactor is not proposed until the generation of reactors after ITER (see also The Project ITER – past and present). After a single series of D-T tests at JET (the Joint European Torus, the largest magnetic confinement experiment currently in operation), the vacuum vessel of the fusion reactor that used this fuel became radioactive enough that remote handling had to be used for the year following the tests.

In a production setting, the neutrons react with lithium in order to create more tritium. This deposits the energy of the neutrons in the lithium, which must therefore be cooled to remove this energy. This reaction protects the outer portions of the reactor from the neutron flux. Newer designs, the advanced tokamak in particular, also use lithium inside the reactor core as a key element of the design.

PS: I strongly recommend reading the article FUSION (A Limitless Source Of Energy). It is a competent technical text for studying Technical English. Moreover, it offers absorbing information about the topic.

 


Fusion reactors in the world

May 10, 2011
Leave a Comment
Implosion of a fusion microcapsule

Composed by Galina Vitkova

Fusion power is power generated by nuclear fusion processes. In fusion reactions two light atomic nuclei fuse together to form a heavier nucleus. During the process a comparatively large amount of energy is released.

The term “fusion power” is commonly used to refer to potential commercial production of usable power from a fusion source, comparable to the usage of the term “steam power”. Heat from the fusion reactions is utilized to operate a steam turbine which in turn drives electrical generators, similar to the process used in fossil fuel and nuclear fission power stations.

Fusion power has significant safety advantages in comparison with current power stations based on nuclear fission. Fusion only takes place under very limited and controlled conditions, so a failure of precise control or a pause in fueling quickly shuts down fusion reactions. There is no possibility of runaway heat build-up or large-scale release of radioactivity, and little or no atmospheric pollution. Furthermore, the power source comprises light elements in small quantities, which are easily obtained and largely harmless to life, and the waste products are short-lived in terms of radioactivity. Finally, there is little overlap with nuclear weapons technology.

 

Fusion Power Grid

 

Fusion powered electricity generation was initially believed to be readily achievable, as fission power had been. However, the extreme requirements for continuous reactions and plasma containment led to projections which were extended by several decades. More than 60 years after the first attempts, commercial fusion power production is still believed to be unlikely before 2040.

The leading designs for controlled fusion research use magnetic (tokamak design) or inertial (laser) confinement of a plasma.

Magnetic confinement of a plasma

The tokamak (see also Number 29 – Easy such and so / April 2011, Nuclear power – tokamaks), using magnetic confinement of a plasma, dominates modern research. Very large projects like ITER (see also The Project ITER – past and present) are expected to pass several important turning points toward commercial power production, including a burning plasma with long burn times, high power output, and online fueling. There are no guarantees that the project will be successful. Unfortunately, previous generations of tokamak machines have revealed new problems many times. But the entire field of high-temperature plasmas is much better understood now than formerly, so ITER is optimistically expected to meet its goals. If successful, ITER would be followed by a “commercial demonstrator” system. The system is supposed to be similar in purpose to the very earliest power-producing fission reactors built in the period before wide-scale commercial deployment of larger machines started in the 1960s and 1970s.

Ultrascale scientific computing

 

Stellarators, which also use magnetic confinement of a plasma, are the earliest controlled fusion devices. The stellarator was invented by Lyman Spitzer in 1950 and built the next year at what later became the Princeton Plasma Physics Laboratory. The name “stellarator” originates from the possibility of harnessing the power source of the sun, a stellar object.

Stellarators were popular in the 1950s and 60s, but the much better results from tokamak designs led to their falling from favor in the 1970s. More recently, in the 1990s, problems with the tokamak concept have led to renewed interest in the stellarator design, and a number of new devices have been built. Some important modern stellarator experiments are Wendelstein, in Germany, and the Large Helical Device, in Japan.

Inertial confinement fusion

Inertial confinement fusion (ICF) is a process where nuclear fusion reactions are initiated by heating and compressing a fuel target, typically in the form of a pellet. The pellets most often contain a mixture of deuterium and tritium.

Inertial confinement fusion

Inertial confinement fusion

 

To compress and heat the fuel, energy is delivered to the outer layer of the target using high-energy beams of laser light, electrons or ions, although for a variety of reasons, almost all ICF devices to date have used lasers. The aim of ICF is to produce a state known as “ignition”, where this heating process causes a chain reaction that burns a significant portion of the fuel. Typical fuel pellets are about the size of a pinhead and contain around 10 milligrams of fuel. In practice, only a small proportion of this fuel will undergo fusion, but if all this fuel were consumed it would release the energy equivalent to burning a barrel of oil.

To date most of the work in ICF has been carried out in France and the United States, and generally has seen less development effort than magnetic approaches. Two large projects are currently underway, the Laser Mégajoule in France and the National Ignition Facility in the United States.

All functioning fusion reactors are listed in Fusion experimental devices, classified by confinement method.

 Reference: Wikipedia, the free encyclopedia http://en.wikipedia


The Project ITER – past and present

April 30, 2011
Leave a Comment

Composed by Galina Vitkova

 

The logo of the ITER Organization

The logo of the ITER Organization

 

„We firmly believe that to harness fusion energy is the only way to reconcile huge conflicting demands which will confront humanity sooner or later“

Director-General Osamu Motojima,  Opening address, Monaco International ITER Fusion Energy Days, 23 November 2010

 

ITER was originally an acronym for International Thermonuclear Experimental Reactor, but that title was dropped in view of the negative popular connotation of “thermonuclear”, especially in conjunction with “experimental”. “Iter” also means “journey”, “direction” or “way” in Latin, reflecting ITER's potential role in harnessing nuclear fusion (see also The ViCTE Newsletter Number 28 – SVOMT revising / March 2011, Nuclear power – fission and fusion) as a peaceful power source.

ITER is a large-scale scientific project intended to prove the practicability of fusion as an energy source and to show that it can work without negative impact. Moreover, it is expected to collect the data necessary for the design and subsequent operation of the first electricity-producing fusion power plant. Besides, it aims to demonstrate the possibility of producing commercial energy from fusion. ITER is the culmination of decades of fusion research: more than 200 tokamaks (see also The ViCTE Newsletter Number 29 – Easy such and so / April 2011, Nuclear power – tokamaks) built over the world have paved the way to the ITER experiment. ITER is the result of the knowledge and experience these machines have accumulated. ITER, which will be twice the size of the largest tokamak currently operating, is conceived as the necessary experimental step on the way to demonstrating the potential of a fusion power plant.

The scientific goal of the ITER project is to deliver ten times the power it consumes. From 50 MW of input power, the ITER machine is designed to produce 500 MW of fusion power – the first of all fusion experiments to produce net energy. During its operational lifetime, ITER will test key technologies necessary for the next step and will develop technologies and processes needed for a fusion power plant – including superconducting magnets and remote handling (maintenance by robot). Furthermore, it will verify tritium breeding concepts and refine neutron shield and heat conversion technology. As a result the ITER project will demonstrate that a fusion power plant is able to capture fusion energy for commercial use.

Launched as an idea for international collaboration in 1985, now the ITER Agreement includes China, the European Union, India, Japan, Korea, Russia and the United States, representing over half of the world’s population. Twenty years of the design work and complex negotiations have been necessary to bring the project to where it is today.

The ITER Agreement was officially signed at the Elysée Palace in Paris on 21 November 2006 by Ministers from the seven ITER Members. In a ceremony hosted by French President Jacques Chirac and the President of the European Commission M. José Manuel Durao Barroso, this Agreement established a legal international entity to be responsible for construction, operation, and decommissioning of ITER.

On 24 October 2007, after ratification by all Members, the ITER Agreement entered into force and officially established the ITER Organization. ITER was originally expected to cost approximately €5 billion. However, the rising price of raw materials and changes to the initial design have more than tripled that amount, to about €16 billion.

Cost Breakdown of ITER Reactor

 

The program is anticipated to last for 30 years – 10 for construction, and 20 of operation. The reactor is expected to take 10 years to build with completion in 2018. The ITER site in Cadarache, France stands ready: in 2010, construction began on the ITER Tokamak and scientific buildings. The seven ITER Members have shared in the design of the installation, the creation of the international project structure, and in its funding.

Key components for the Tokamak will be manufactured in the seven Member States and shipped to France by sea. From the port in Berre l'Etang on the Mediterranean, the components will be transported by special convoy along the 104 kilometres of the ITER Itinerary to Cadarache. The exceptional size and weight of certain of the Tokamak components made large-scale public works necessary to widen roads, reinforce bridges and modify intersections. Costs were shared by the Bouches-du-Rhône department Council (79%) and the French State (21%). Work on the Itinerary was completed in December 2010.

Two trial convoys will be organized in 2011 to put the Itinerary’s resistance and design to the test before a full-scale practice convoy in 2012, and the arrival of the first components for ITER by sea.

Between 2012 and 2017, 200 exceptional convoys will travel by night at reduced speeds along the ITER Itinerary, bypassing 16 villages, negotiating 16 roundabouts, and crossing 35 bridges.

Manufacturing of components for ITER has already begun in the Members' industries all over the world. So, the level of coordination required for the successful fabrication of over one million parts for the ITER Tokamak alone is creating, day by day, a new model of international scientific collaboration.

ITER, without question, is a very complex project. Building ITER will require a continuous and joint effort involving all partners. In any case, this project remains a challenging task, and for most of the participants it is a once-in-a-lifetime opportunity to contribute to such a fantastic endeavour.

 



Game Theory in Computer Science

January 25, 2011
Leave a Comment


        By Galina Vitkova  

Computer science or computing science (sometimes abbreviated CS) is the study of the theoretical foundations of information and computation and of practical techniques for their implementation and application in computer systems. It concerns the systematic study of algorithmic processes that describe and transform information. Computer science has many sub-fields. For example, computer graphics, computational complexity theory (studies the properties of computational problems), programming language theory (studies approaches to describing computations), computer programming (applies specific programming languages to solve specific problems), and human-computer interaction (focuses on making computers universally accessible to people) belong to such very important sub-fields of computer science. 

Game theory has come to play an increasingly important role in computer science. Computer scientists have used games to model interactive computations and for developing communication skills. Moreover, they apply game theory as a theoretical basis to the field of multi-agent systems (MAS), which are systems composed of multiple interacting intelligent agents (or players). Separately, game theory has played a role in online algorithms, particularly in the k-server problem.

Interactive computation is a kind of computation that involves communication with the external world during the computation. This is in contrast to the traditional understanding of computation which assumes a simple interface between a computing agent and its environment. Unfortunately, a definition of adequate mathematical models of interactive computation remains a challenge for computer scientists. 

 
An online algorithm is one that can process its input piece by piece in a serial fashion, i.e. in the order in which the input is fed to the algorithm, without having the entire input available from the start of the computation. In contrast, an offline algorithm is given the whole problem data from the beginning and is required to output an answer which solves the problem at hand.

An animation of the quicksort algorithm sortin...

Image via Wikipedia

 (For example, selection sort requires that the entire list be given before it can sort it, while insertion sort doesn’t.) As the whole input is not known, an online algorithm is forced to make decisions that may later turn out not to be optimal. Thus the study of online algorithms has focused on the quality of decision-making that is possible in this setting.
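A short sketch of the contrast, assuming Python and the invented input [5, 2, 8, 1]: insertion sort can keep its output sorted while the items are still arriving, whereas selection sort cannot even start until the whole list is available.

```python
# A small illustration of the online/offline distinction using sorting.
# Insertion sort can keep a sorted list while items arrive one by one;
# selection sort needs the complete input before it can even start.
import bisect

def online_insertion_sort(stream):
    """Consume items one at a time, keeping the data sorted after every step."""
    sorted_so_far = []
    for item in stream:
        bisect.insort(sorted_so_far, item)       # insert in the right place
        print("after", item, "->", sorted_so_far)
    return sorted_so_far

def offline_selection_sort(data):
    """Needs the whole list up front: each pass scans all remaining items."""
    data = list(data)
    for i in range(len(data)):
        j = min(range(i, len(data)), key=data.__getitem__)
        data[i], data[j] = data[j], data[i]
    return data

print(online_insertion_sort(iter([5, 2, 8, 1])))
print(offline_selection_sort([5, 2, 8, 1]))
```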

The Canadian Traveller Problem exemplifies the concepts of online algorithms. The goal of this problem is to minimize the cost of reaching a target in a weighted graph where some of the edges are unreliable and may have been removed from the graph. However, the fact that an edge was removed (failed) is only revealed to the traveller when she/he reaches one of the edge's endpoints. The worst case in the study of this problem is simply a situation when all of the unreliable edges fail and the problem reduces to the usual Shortest Path Problem.

Johnson's algorithm for transforming a shortes...

Image via Wikipedia

 

The Shortest Path Problem concerns finding a path between two vertices (or nodes) of a graph such that the sum of the weights of its edges is minimized. An example is finding the quickest way to get from one location to another on a road map. In this case, the nodes represent locations, and the edges represent segments of road and are weighted by the time needed to travel that segment.
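For the offline version of the problem, where all edge weights are known in advance, Dijkstra's algorithm is the classic solution. Below is a compact sketch with an invented road map; the Canadian Traveller Problem is harder precisely because this complete knowledge of the graph is missing.

```python
# A compact Dijkstra's algorithm sketch for the (offline) Shortest Path Problem:
# all edge weights are known in advance, unlike in the Canadian Traveller Problem.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbour, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                              # stale queue entry
        for neighbour, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

# Invented road map: travel times in minutes between locations.
roads = {
    "home": [("bridge", 10), ("tunnel", 15)],
    "bridge": [("office", 20)],
    "tunnel": [("office", 10)],
}
print(dijkstra(roads, "home"))   # {'home': 0, 'bridge': 10, 'tunnel': 15, 'office': 25}
```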

The k-server problem is a problem of theoretical computer science in the category of online algorithms. In this problem, an online algorithm must control the movement of a set of k servers, represented as points in a metric space, and handle requests that are also given in the form of points in the space. As soon as a request arrives, the algorithm must determine which server to move to the requested point. The goal of the algorithm is to keep the total distance all servers move small, relative to the total distance the servers could have moved by an optimal adversary who knows in advance the entire sequence of requests.
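To make the setting concrete, here is a deliberately naive greedy heuristic for points on a line: always move the closest server. This is not a competitive algorithm (the known competitive algorithms, such as the Work Function Algorithm, are considerably more involved), and the example request sequence shows how greedy keeps shuttling one server back and forth while an optimal strategy would simply park one server at each request point.

```python
# A deliberately naive greedy heuristic for the k-server problem on a line
# (a 1-D metric space): always move the server closest to the request.
# This is NOT a competitive algorithm; it only illustrates the setting.
def greedy_k_server(servers, requests):
    servers = list(servers)
    total_distance = 0
    for r in requests:
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        total_distance += abs(servers[i] - r)
        servers[i] = r                    # move the chosen server to the request
    return total_distance

# Two servers at 0 and 100, requests alternating between 40 and 50.
# Greedy shuttles one server back and forth (cost 40 + 19*10 = 230),
# while parking one server at 40 and the other at 50 would cost only 90.
print(greedy_k_server([0, 100], [40, 50] * 10))
```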

The problem was first posed in 1990. The most prominent open question concerning the k-server problem is the so-called k-server conjecture. This conjecture states that there is an algorithm with competitive ratio k for solving the k-server problem in an arbitrary metric space and for any number k of servers. The special case of metrics in which all distances are equal is called the paging problem, because it models the problem of page replacement algorithms in memory caches. In a computer operating system that uses paging for virtual memory management, page replacement algorithms decide which memory pages to page out (swap out, write to disk) when a page of memory needs to be allocated. Paging happens when a page fault occurs and a free page cannot be used to satisfy the allocation, either because there are none, or because the number of free pages is lower than a set threshold.
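The paging special case can be illustrated with a classic page replacement policy. The sketch below implements least-recently-used (LRU) eviction for an invented reference string; it is one simple policy among many studied for this problem, not the provably optimal one.

```python
# A least-recently-used (LRU) page replacement sketch: the paging problem is
# the special case of the k-server problem where all distances are equal.
from collections import OrderedDict

def lru_paging(frames, reference_string):
    """frames: number of physical page frames; returns the number of page faults."""
    cache = OrderedDict()          # page -> None, ordered by recency of use
    faults = 0
    for page in reference_string:
        if page in cache:
            cache.move_to_end(page)            # page hit: mark as recently used
        else:
            faults += 1                        # page fault
            if len(cache) == frames:
                cache.popitem(last=False)      # evict the least recently used page
            cache[page] = None
    return faults

# Invented reference string of virtual page numbers.
print(lru_paging(3, [1, 2, 3, 1, 4, 2, 5, 1]))   # -> 7 page faults
```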

 

