Why Technical English

Website – basic information

November 28, 2011
7 Comments

Website and Its Characteristics

                                                                                             Composed by Galina Vitkova using Wikipedia

A website (or web site) is a collection of web pages, typically served from a single domain name on the Internet. A web page is a document usually written in HTML (HyperText Markup Language), which is almost always accessible via HTTP (HyperText Transfer Protocol). HTTP is a protocol that transfers information from the web server so that it can be displayed in the user’s web browser. All publicly accessible websites constitute the immense World Wide Web of information. More formally, a website might be considered a collection of pages dedicated to a similar or identical subject or purpose and hosted on a single domain.
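The request-response exchange described above can be made concrete with a short sketch. The snippet below merely composes the text of a plain HTTP GET request such as a browser sends to a web server; the host name example.com is just an illustrative placeholder.

```python
# A minimal sketch of the HTTP GET request a browser sends to a web
# server when asking for a page. Only standard string formatting is
# used; the host name is an illustrative placeholder.
def build_get_request(host: str, path: str = "/") -> str:
    """Compose the text of an HTTP/1.1 GET request for one web page."""
    return (
        f"GET {path} HTTP/1.1\r\n"   # request line: method, path, version
        f"Host: {host}\r\n"          # the domain the pages belong to
        "Connection: close\r\n"
        "\r\n"                       # blank line ends the header section
    )

request = build_get_request("example.com")
print(request)
```

The server answers with a status line (e.g. HTTP/1.1 200 OK) followed by the HTML of the page, which the browser then renders.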

The pages of a website are accessed from a common root URL (Uniform Resource Locator or Universal Resource Locator) called the homepage, and usually reside on the same physical server. The URLs of the pages organise them into a hierarchy. Nonetheless, it is the hyperlinks between web pages that shape how the reader perceives the overall structure and how traffic flows between the different parts of the site. The first online website appeared in 1991 at CERN (the European Organization for Nuclear Research, situated in the suburbs of Geneva on the Franco–Swiss border) – for more information see ViCTE Newsletter Number 5 – WWW History (Part 1) / May 2009 and Number 6 – WWW History (Part 2) / June 2009.

A website may belong to an individual, a business or other organization. Any website can contain hyperlinks to any other website, so differentiating one particular site from another may sometimes be difficult for the user.

Websites are commonly written in, or dynamically converted to, HTML and are accessed using a web browser. Websites can be reached from a number of computer-based and Internet-enabled devices, including desktop computers, laptops, PDAs (personal digital assistants) and cell phones.


A website is hosted on a computer system called a web server or an HTTP server. These terms also refer to the software that runs on the servers and that retrieves and delivers the web pages in response to users’ requests.

Static and dynamic websites are distinguished. A static website is one whose content is not expected to change frequently and is manually maintained by a person or persons via editor software. It provides the same standard information to all visitors for a certain period of time between updates of the site.

A dynamic website is one that has frequently changing information or interacts with the user through varying context (HTTP cookies or database variables, e.g. previous history, session variables, server-side variables) or direct interaction (form elements, mouseovers, etc.). When the web server receives a request for a given page, the page is automatically generated by the software. A site can display the current state of a dialogue between users, monitor a changing situation, or provide information adapted in some way to the particular user.

Static content may also be dynamically generated, either periodically or when certain conditions for regeneration occur, in order to avoid the performance loss of invoking the dynamic engine on every request.
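The idea of regenerating static content only when needed can be sketched in a few lines. The functions and data below are hypothetical, not part of any real web framework.

```python
# Sketch of static content generated dynamically but cached: the page is
# rebuilt only when its underlying data changes, avoiding the cost of
# invoking the dynamic engine on every request. All names are illustrative.
_cache: dict = {}

def render(article: dict) -> str:
    """The 'dynamic engine': build an HTML page from stored data."""
    return f"<html><body><h1>{article['title']}</h1><p>{article['text']}</p></body></html>"

def serve(article: dict) -> str:
    """Serve a cached copy, regenerating only when the data version changes."""
    key = (article["id"], article["version"])
    if key not in _cache:               # regeneration condition
        _cache[key] = render(article)
    return _cache[key]

page = {"id": 1, "version": 1, "title": "Home", "text": "Welcome"}
first = serve(page)
second = serve(page)      # no re-rendering: returned straight from the cache
```

Bumping the `version` field would invalidate the cached copy and trigger one fresh render, which is exactly the periodic or conditional regeneration described above.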


Some websites demand a subscription to access some or all of their content. Examples of subscription websites include numerous business sites, parts of news websites, academic journal websites, gaming websites, social networking sites, websites affording real-time stock market data, websites providing various services (e.g., websites offering storing and/or sharing of images, files, etc.) and many others.

To display active site content or even create rich Internet applications, plug-ins such as Microsoft Silverlight, Adobe Flash, Adobe Shockwave or applets are used. They provide interactivity for the user and real-time updating within web pages (i.e. pages don’t have to be reloaded to effect changes), mainly applying the DOM (Document Object Model) and JavaScript.

There are many varieties of websites, each specialising in a particular type of content or use, and they may be arbitrarily classified in any number of ways. A few such classifications might include: Affiliate, Archive site, Corporate website, Commerce site, Directory site and many others (see a detailed classification in Types of websites).

In February 2009, the Internet monitoring company Netcraft, which has tracked web growth since 1995, reported that there were 106,875,138 websites in 2007 and 215,675,903 websites in 2009 with domain names and content on them, compared to just 18,000 websites in August 1995.

PS: Spelling – what is better, what is correct: “website” or “web site”?

The form “website” has gradually become the standard spelling. It is used, for instance, by such leading dictionaries and encyclopedias as the Canadian Oxford Dictionary, the Oxford English Dictionary and Wikipedia. Nevertheless, the form “web site” is still widely used, e.g. by Encyclopædia Britannica (including its Merriam-Webster subsidiary). Among major Internet technology companies, Microsoft uses “website” and occasionally “web site”, Apple uses “website”, and Google uses “website”, too.

PPS: Unknown technical terms can be found in the Internet English Vocabulary.

Reference: Website – Wikipedia, the free encyclopedia

Have You Donated To Wikipedia Already?

Do you use Wikipedia? Do you know that Jimmy Wales, the founder of Wikipedia, decided to keep Wikipedia advertising-free and unbiased? As a result, the project now has problems financing its survival. Any donation, even a small sum, is helpful. So here’s the page where you can donate.

Dear visitor, if you want to improve your professional English and at the same time gain basic comprehensive, targeted information about the Internet and the Web, subscribe to “Why Technical English”.

Look at the right sidebar and subscribe as you like:

  • by Email subscription … Sign me up
  • Subscribe with Bloglines
  • Subscribe.ru

Right now, in the course of preparing the e-book “Internet English” (see ViCTE Newsletter Number 33 – WWW, Part 1 / August 2011), posts on this topic are being published here. Your comments on the posts are welcome.


The Semantic Web – great expectations

October 31, 2011
3 Comments

By Galina Vitkova

The Semantic Web is the further development of the World Wide Web, aimed at making the content of web pages interpretable as machine-readable information.

In the classical Web based on HTML pages, the information is contained in text or documents which a browser reads and composes into pages visible or audible to humans. The Semantic Web is supposed to store information as a semantic network through the use of ontologies. A semantic network is usually a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent relations among the concepts. An ontology is simply a vocabulary that describes objects and how they relate to one another. A program agent is thus able to mine facts directly from the Semantic Web and draw logical conclusions based on them. The Semantic Web functions together with the existing Web and uses the HTTP protocol and URI resource identifiers.

The term Semantic Web was coined by Sir Tim Berners-Lee, the inventor of the World Wide Web and director of the World Wide Web Consortium (W3C), in May 2001 in the journal Scientific American. Tim Berners-Lee considers the Semantic Web the next step in the development of the World Wide Web. W3C has adopted and promoted this concept.

Main idea

The Semantic Web is simply a hyper-structure above the existing Web. It extends the network of hyperlinked human-readable web pages by inserting machine-readable metadata about pages and how they are related to each other. It is proposed to help computers “read” and use the Web in a more sophisticated way. Metadata can allow more complex, focused Web searches with more accurate results. To paraphrase Tim Berners-Lee, the extension will let the Web – currently similar to a giant book – become a giant database. Machine processing of the information in the Semantic Web is enabled by two of its most important features.

  • First – the all-around application of uniform resource identifiers (URIs), which are known as addresses. Traditionally on the Internet these identifiers are used for pointing hyperlinks to an addressed object (web pages, e-mail addresses, etc.). In the Semantic Web the URIs are also used for specifying resources, i.e. a URI identifies exactly one object. Moreover, in the Semantic Web not only web pages or their parts have URIs; objects of the real world may have URIs too (e.g. humans, towns, novel titles, etc.). Furthermore, abstract resource attributes (e.g. name, position, colour) have their own URIs. As URIs are globally unique, they make it possible to identify the same objects in different places on the Web. Concurrently, URIs of the HTTP protocol (i.e. addresses beginning with http://) can be used as addresses of documents that contain a machine-readable description of these objects.

  • Second – the application of semantic networks and ontologies. Present-day methods of automatically processing information on the Internet are as a rule based on frequency and lexical analysis or parsing of text that is designed for human perception. In the Semantic Web, the RDF (Resource Description Framework) standard is applied instead, which uses semantic networks (i.e. graphs whose vertices and edges have URIs) for representing the information. Statements coded by means of RDF can be further interpreted by ontologies created in compliance with the standards of RDF Schema and OWL (Web Ontology Language) in order to draw logical conclusions. Ontologies are built using so-called description logics. Ontologies and schemata help a computer to understand human vocabulary.
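A semantic network of RDF-style triples, and the kind of logical conclusion a program agent can draw from it, can be imitated in a few lines of ordinary code. The URIs below are abbreviated placeholders (ex:…), and the inference rule shown (transitivity of subclassing) is only one simple example of what ontologies allow.

```python
# A toy semantic network: (subject, predicate, object) triples whose
# nodes play the role of URIs, plus one inference rule - if X is of
# type C and C is a subclass of D, then X is also of type D.
TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

triples = {
    ("ex:Dog", SUBCLASS, "ex:Mammal"),
    ("ex:Mammal", SUBCLASS, "ex:Animal"),
    ("ex:Rex", TYPE, "ex:Dog"),
}

def infer_types(graph: set) -> set:
    """Repeatedly apply the subclass rule until no new facts appear."""
    graph = set(graph)
    while True:
        new = {(s, TYPE, parent)
               for (s, p, cls) in graph if p == TYPE
               for (c, q, parent) in graph if q == SUBCLASS and c == cls}
        if new <= graph:          # nothing new derived: fixed point reached
            return graph
        graph |= new

closed = infer_types(triples)
# The agent concludes that Rex is an animal, a fact never stated directly:
print(("ex:Rex", TYPE, "ex:Animal") in closed)   # True
```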

 

Semantic Web Technologies

The architecture of the Semantic Web can be represented by the Semantic Web Stack, also known as the Semantic Web Cake or Semantic Web Layer Cake. The Semantic Web Stack is an illustration of the hierarchy of languages, where each layer exploits and uses the capabilities of the layers below. It shows how the technologies that are standardized for the Semantic Web are organized to make the Semantic Web possible. It also shows how the Semantic Web is an extension (not a replacement) of the classical hypertext Web. The illustration was created by Tim Berners-Lee. The stack is still evolving as the layers are concretized.

Semantic Web Stack

As shown in the Semantic Web Stack, the following languages or technologies are used to create the Semantic Web. The technologies from the bottom of the stack up to OWL (Web Ontology Language) are currently standardized and accepted for building Semantic Web applications. It is still not clear how the top of the stack is going to be implemented. All layers of the stack need to be implemented to achieve the full vision of the Semantic Web.

  • XML (eXtensible Markup Language) is a set of rules for encoding documents in machine-readable form. It is a markup language like HTML. XML complements (but does not replace) HTML by adding tags that describe data.
  • XML Schema published as a W3C recommendation in May 2001 is one of several XML schema languages. It can be used to express a set of rules to which an XML document must conform in order to be considered ‘valid’.
  • RDF (Resource Description Framework) is a family of W3C specifications originally designed as a metadata data model. It has come to be used as a general method for conceptual description of information that is implemented in web resources. RDF does exactly what its name indicates: using XML tags, it provides a framework to describe resources. In RDF terms, everything in the world is a resource. This framework pairs the resource with a specific location in the Web, so the computer knows exactly what the resource is. To do this, RDF uses triples written as XML tags to express this information as a graph. These triples consist of a subject, property and object, which are like the subject, verb and direct object of an English sentence.
  • RDFS (RDF Schema) provides a basic vocabulary for RDF, adding classes, subclasses and properties to resources and creating a basic language framework.
  • OWL (Web Ontology Language) is a family of knowledge representation languages for creating ontologies. It extends RDFS and, as the most complex layer, formalizes ontologies, describes relationships between classes and uses logic to make deductions.
  • SPARQL (SPARQL Protocol and RDF Query Language) is an RDF query language, which can be used to query any RDF-based data. It enables semantic web applications to retrieve information.
  • Microdata (HTML) is an international standard for nesting semantics within existing content on web pages. Search engines, web crawlers, and browsers can extract and process Microdata from a web page, providing better search results.
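The way SPARQL retrieves data – by matching triple patterns against an RDF graph – can be illustrated with a small pure-Python sketch. Names beginning with "?" play the role of query variables; a real application would use an RDF library instead of this hand-rolled matcher.

```python
# A minimal imitation of SPARQL-style triple pattern matching: query()
# returns the variable bindings that satisfy one pattern. The data and
# URIs are illustrative placeholders.
triples = [
    ("ex:Berlin", "ex:capitalOf", "ex:Germany"),
    ("ex:Paris", "ex:capitalOf", "ex:France"),
    ("ex:Paris", "ex:population", "2100000"),
]

def query(pattern: tuple, graph: list) -> list:
    """Return one dict of variable bindings per matching triple."""
    results = []
    for triple in graph:
        binding = {}
        for pat, value in zip(pattern, triple):
            if pat.startswith("?"):
                binding[pat] = value     # a variable matches anything
            elif pat != value:
                break                    # a constant must match exactly
        else:
            results.append(binding)
    return results

# Analogous to: SELECT ?city WHERE { ?city ex:capitalOf ex:France }
matches = query(("?city", "ex:capitalOf", "ex:France"), triples)
print(matches)    # [{'?city': 'ex:Paris'}]
```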

As mentioned, the top layers contain technologies that are not yet standardized or comprise just ideas. Perhaps the layers Cryptography and Trust are the most uncommon of them. Cryptography ensures and verifies the origin of web statements from a trusted source by means of a digital signature of RDF statements. Trust in derived statements means that the premises come from a trusted source and that the formal logic used in deriving new information is reliable.


Fuel cycle in fusion reactors

May 25, 2011
Leave a Comment

Composed by Galina Vitkova

Common notes

The basic concept behind any fusion reaction is to bring two or more nuclei close enough together that the nuclear force will pull them together into one larger nucleus. If two light nuclei fuse, they will generally form a single nucleus with a slightly smaller mass than the sum of their original masses (though this is not always the case). The difference in mass is released as energy according to Albert Einstein’s mass–energy equivalence formula E = mc². If the input nuclei are sufficiently massive, the resulting fusion product will be heavier than the sum of the reactants’ original masses, in which case the reaction requires an external source of energy. The dividing line between “light” and “heavy” nuclei is iron-56. Above this atomic mass, energy will generally be released by nuclear fission reactions; below it, by fusion.
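The energy released by a concrete fusion reaction can be checked directly from the mass difference. The sketch below applies E = mc² to the deuterium-tritium reaction (D + T → He-4 + n), using rounded published atomic masses and the standard conversion 1 u ≈ 931.494 MeV.

```python
# Energy released by D + T -> He-4 + n, from the mass defect and E = mc^2.
# Atomic masses are rounded literature values in unified atomic mass units.
U_TO_MEV = 931.494        # energy equivalent of 1 u, in MeV

masses_u = {
    "D":   2.014102,      # deuterium
    "T":   3.016049,      # tritium
    "He4": 4.002602,      # helium-4
    "n":   1.008665,      # neutron
}

mass_defect = (masses_u["D"] + masses_u["T"]) - (masses_u["He4"] + masses_u["n"])
energy_mev = mass_defect * U_TO_MEV
print(f"{energy_mev:.1f} MeV released per reaction")   # ≈ 17.6 MeV
```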

Fusion between nuclei is opposed by their shared electrical charge, specifically the net positive charge of the protons in the nucleus. To overcome this electrostatic force, some external source of energy must be supplied. The easiest way to achieve this is to heat the atoms, which has the side effect of stripping the electrons from the atoms and leaving bare nuclei. In most experiments the nuclei and electrons are left in a fluid known as a plasma. The temperature required to provide the nuclei with enough energy to overcome their repulsion is a function of the total charge. Thus hydrogen, which has the smallest nuclear charge, reacts at the lowest temperature. Helium has an extremely low mass per nucleon and therefore is energetically favoured as a fusion product. As a consequence, most fusion reactions combine isotopes of hydrogen (“protium”, deuterium, or tritium) to form isotopes of helium.

In both magnetic confinement and inertial confinement fusion reactor designs tritium is used as a fuel. The experimental fusion reactor ITER (see also The Project ITER – past and present) and the National Ignition Facility (NIF) will use deuterium-tritium fuel. The deuterium-tritium reaction is favourable since it has the largest fusion cross-section, which leads to a greater probability of a fusion reaction occurring.

Deuterium-tritium (D-T) fuel cycle

Deuterium-tritium (D-T) fusion

 

The easiest and most immediately promising nuclear reaction for fusion power is the deuterium-tritium fuel cycle. Hydrogen-2 (deuterium) is a naturally occurring isotope of hydrogen and as such is universally available. Hydrogen-3 (tritium) is also an isotope of hydrogen, but it occurs naturally in only negligible amounts because of its radioactive half-life of 12.32 years. Consequently, the deuterium-tritium fuel cycle requires the breeding of tritium from lithium. Most reactor designs use the naturally occurring mix of lithium isotopes.
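The 12.32-year half-life quoted above explains why tritium cannot simply be stockpiled and must be bred in the reactor; a quick sketch of the standard decay law makes the point.

```python
# Fraction of a tritium stock remaining after a given time, from the
# radioactive decay law N(t) = N0 * 0.5 ** (t / half_life).
HALF_LIFE_YEARS = 12.32

def tritium_remaining(years: float) -> float:
    """Fraction of the original tritium left after `years`."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

print(round(tritium_remaining(12.32), 3))   # 0.5 - one half-life
print(round(tritium_remaining(50.0), 3))    # only ~6% left after 50 years
```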

Several drawbacks are commonly attributed to the D-T fuel cycle of the fusion power:

  1. It produces substantial amounts of neutrons that result in induced radioactivity within the reactor structure.
  2. The use of D-T fusion power depends on lithium resources, which are less abundant than deuterium resources.
  3. It requires the handling of the radioisotope tritium. Like hydrogen, tritium is difficult to contain and may leak from reactors in certain quantities. Hence, some estimates suggest that this would represent a fairly large environmental release of radioactivity.

Problems with material design

The huge neutron flux expected in a commercial D-T fusion reactor poses problems for material design. The design of suitable materials is under way, but their actual use in a reactor is not proposed until the generation after ITER (see also The Project ITER – past and present). After a single series of D-T tests at JET (the Joint European Torus, the largest magnetic confinement experiment currently in operation), the vacuum vessel of the fusion reactor became sufficiently radioactive that remote handling had to be used for the year following the tests.

In a production setting, the neutrons would react with lithium to breed more tritium. This deposits the energy of the neutrons in the lithium, which must therefore be cooled to remove this energy. This reaction also protects the outer portions of the reactor from the neutron flux. Newer designs, the advanced tokamak in particular, also use lithium inside the reactor core as a key element of the design.

PS: I strongly recommend reading the article FUSION (A Limitless Source Of Energy). It is a competent technical text for studying Technical English, and it offers absorbing information about the topic.

 


The Project ITER – past and present

April 30, 2011
Leave a Comment

Composed by Galina Vitkova

 

The logo of the ITER Organization


 

“We firmly believe that to harness fusion energy is the only way to reconcile huge conflicting demands which will confront humanity sooner or later.”

Director-General Osamu Motojima,  Opening address, Monaco International ITER Fusion Energy Days, 23 November 2010

 

ITER was originally an acronym for International Thermonuclear Experimental Reactor, but that title was dropped in view of the negative popular connotation of “thermonuclear”, especially in conjunction with “experimental”. “Iter” also means “journey”, “direction” or “way” in Latin, a nod to ITER’s potential role in harnessing nuclear fusion (see also The ViCTE Newsletter Number 28 – SVOMT revising / March 2011 Nuclear power – fission and fusion) as a peaceful power source.

ITER is a large-scale scientific project intended to prove the practicability of fusion as an energy source and to prove that it can work without negative impact. Moreover, it is expected to collect the data necessary for the design and subsequent operation of the first electricity-producing fusion power plant. Besides, it aims to demonstrate the possibility of producing commercial energy from fusion. ITER is the culmination of decades of fusion research: more than 200 tokamaks (see also The ViCTE Newsletter Number 29 – Easy such and so / April 2011 Nuclear power – tokamaks) built all over the world have paved the way to the ITER experiment. ITER is the result of the knowledge and experience these machines have accumulated. ITER, which will be twice the size of the largest tokamak currently operating, is conceived as the necessary experimental step on the way to a demonstration fusion power plant.

The scientific goal of the ITER project is to deliver ten times the power it consumes: from 50 MW of input power, the ITER machine is designed to produce 500 MW of fusion power – the first of all fusion experiments to produce net energy. During its operational lifetime, ITER will test key technologies necessary for the next step and will develop the technologies and processes needed for a fusion power plant – including superconducting magnets and remote handling (maintenance by robot). Furthermore, it will verify tritium breeding concepts and refine neutron shield/heat conversion technology. As a result the ITER project aims to demonstrate that a fusion power plant is able to capture fusion energy for commercial use.
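The goal of delivering ten times the power consumed is usually expressed as the fusion gain factor Q; a one-line check of the figures above:

```python
# Fusion gain factor Q = fusion power out / heating power in.
def fusion_gain(p_fusion_mw: float, p_input_mw: float) -> float:
    return p_fusion_mw / p_input_mw

q = fusion_gain(500, 50)      # ITER design figures from the text
print(q)                      # 10.0, i.e. the design goal Q >= 10
```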

Launched as an idea for international collaboration in 1985, now the ITER Agreement includes China, the European Union, India, Japan, Korea, Russia and the United States, representing over half of the world’s population. Twenty years of the design work and complex negotiations have been necessary to bring the project to where it is today.

The ITER Agreement was officially signed at the Elysée Palace in Paris on 21 November 2006 by Ministers from the seven ITER Members. In a ceremony hosted by French President Jacques Chirac and the President of the European Commission José Manuel Durão Barroso, this Agreement established a legal international entity to be responsible for the construction, operation, and decommissioning of ITER.

On 24 October 2007, after ratification by all Members, the ITER Agreement entered into force and officially established the ITER Organization. ITER was originally expected to cost approximately €5 billion. However, the rising price of raw materials and changes to the initial design have more than tripled that amount, to €16 billion.

Cost Breakdown of ITER Reactor


 

The program is anticipated to last for 30 years – 10 years of construction and 20 years of operation. The reactor is expected to take 10 years to build, with completion in 2018. The ITER site in Cadarache, France stands ready: in 2010, construction began on the ITER Tokamak and scientific buildings. The seven ITER Members have shared in the design of the installation, the creation of the international project structure, and in its funding.

Key components for the Tokamak will be manufactured in the seven Member States and shipped to France by sea. From the port in Berre l’Etang on the Mediterranean, the components will be transported by special convoy along the 104 kilometres of the ITER Itinerary to Cadarache. The exceptional size and weight of certain Tokamak components made large-scale public works necessary to widen roads, reinforce bridges and modify intersections. Costs were shared by the Bouches-du-Rhône department Council (79%) and the French State (21%). Work on the Itinerary was completed in December 2010.

Two trial convoys will be organized in 2011 to put the Itinerary’s resistance and design to the test before a full-scale practice convoy in 2012, and the arrival of the first components for ITER by sea.

Between 2012 and 2017, 200 exceptional convoys will travel by night at reduced speeds along the ITER Itinerary, bypassing 16 villages, negotiating 16 roundabouts, and crossing 35 bridges.

Manufacturing of components for ITER has already begun in the Members’ industries all over the world. The level of coordination required for the successful fabrication of over one million parts for the ITER Tokamak alone is creating a new model of international scientific collaboration day by day.

ITER, without question, is a very complex project. Building ITER will require a continuous and joint effort involving all partners. In any case, this project remains a challenging task and for most of participants it is a once-in-a-lifetime opportunity to contribute to such a fantastic endeavour.

 


Nuclear energy future after Fukushima

March 23, 2011
11 Comments
Composed by Galina Vitkova

What does the damage to the Fukushima plant (see picture below) forecast for Japan – and for the world? But first, let us give a general description of nuclear power stations in order to understand the problems caused by the breakdown.

 

The Fukushima 1 NPP


Nuclear fission. Nowadays nuclear power stations generate energy using nuclear fission (Fukushima belongs to this type of nuclear power plant). Uranium-235 atoms in the fuel rods in the reactor are split in the process of fission, causing a chain reaction with other nuclei. During this process a large amount of energy is released. The energy heats water to create steam, which rotates a turbine coupled to a generator, producing electricity.

Depending on the type of fission, the outlook for supplying fuel at the existing level of use varies from several decades for uranium-235 to thousands of years for uranium-238. At the present rate of use, uranium-235 reserves (as of 2007) will be exhausted in about 70 years. The nuclear industry argues that the cost of fuel is a minor component of the cost of fission power. In the future, mining of uranium sources could become more expensive and more difficult. However, increasing the price of uranium would have little effect on the overall cost of nuclear power. For instance, a doubling in the cost of natural uranium would increase the total cost of nuclear power by about 5 percent. By contrast, a doubling of the natural gas price results in a 60 percent growth in the cost of gas-fired power.
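The sensitivity figures above follow from the share of fuel in the total cost of electricity: raising only the fuel price increases total cost by the fuel share times the extra price factor. The shares below are illustrative values chosen to reproduce the numbers in the text.

```python
# Relative increase in total generating cost when only the fuel price
# changes: increase = fuel_share * (price_factor - 1). Shares are
# illustrative, matching the 5% / 60% figures quoted in the text.
def cost_increase(fuel_share: float, price_factor: float) -> float:
    return fuel_share * (price_factor - 1)

# Doubling fuel prices (factor 2):
print(cost_increase(0.05, 2))   # 0.05 -> nuclear cost grows ~5%
print(cost_increase(0.60, 2))   # 0.60 -> gas-fired cost grows ~60%
```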

The possibility of nuclear meltdowns and other reactor accidents, such as the Three Mile Island accident and the Chernobyl disaster, has caused much public concern. Nevertheless, coal and hydro power stations have both been accompanied by more deaths per unit of energy produced than nuclear power generation.

At present, nuclear energy is in decline, according to a 2007 World Nuclear Industry Status Report presented in the European Parliament. The report outlines that the share of nuclear energy in power production decreased in 21 out of 31 countries, with five fewer functioning nuclear reactors than five years ago. Currently 32 nuclear power plants are under construction or in the pipeline, 20 fewer than at the end of the 1990s.

Fusion. Fusion power could solve many of the problems of fission power. Nevertheless, despite research started in the 1950s, no commercial fusion reactor is expected before 2050. Many technical problems remain unsolved. Proposed fusion reactors commonly use deuterium and lithium as fuel. Assuming that fusion energy output remains at its projected level, the known lithium reserves would last 3000 years, and lithium from sea water 60 million years. A more complicated fusion process using only deuterium from sea water would have fuel for 150 billion years.

Thanks to a joint effort of the European Union (EU), America, China, India, Japan, Russia and South Korea, a prototype reactor is being constructed at a site in Cadarache (in France). It is supposed to be put into operation by 2018.

Initial projections in 2006 put its price at €10 billion ($13 billion): €5 billion to build and another €5 billion to run and decommission the thing. Since then construction costs alone have tripled.

As the host, the EU is committed to covering 45% of these costs, with the other partners contributing about 9% each. In May 2010 the European Commission asked member states to contribute an additional €1.4 billion to see the project through to 2013. Member states rejected the request.

Sustainability: The environmental movement emphasizes sustainability of energy use and development. “Sustainability” also refers to the ability of the environment to cope with waste products, especially air pollution.

The long-term radioactive waste storage problems of nuclear power have not been fully solved to date. Several countries use underground repositories. Needless to add, nuclear waste takes up little space compared to waste from the chemical industry, which remains toxic indefinitely.

Future of nuclear industry. Let us return to how the damage to the Fukushima plant affects the future use of nuclear power in Japan – and in the world.

Share of nuclear electricity production in total domestic production

Nowadays nuclear plants provide about a third of Japan’s electricity (see chart). Fukushima is not the first plant to be paralysed by an earthquake, but it is the first to be stricken by the technology’s dependence on a supply of water for cooling.

The 40-year-old reactors in Fukushima run by the Tokyo Electric Power Company faced a disaster beyond anything their designers were required to imagine.

What of the rest of the world? Nuclear industry supporters had hopes of a nuclear renaissance as countries try to reduce carbon emissions. There is talk of a boom like that of the 1970s, when 25 or so plants started construction each year in rich countries. Public opinion will surely take a dive. At the least, it will be difficult to find the political will or the money to modernise the West’s ageing reactors, though without modernisation they will not become safer. The distressing images from Fukushima, and the sense of lurching misfortune, will not be forgotten even if the final figures reveal little damage to health. France, which has 58 nuclear reactors, seems to see the disaster in Japan as an opportunity rather than an obstacle for its nuclear industry. On March 14th President Nicolas Sarkozy said that French-built reactors have lost international tenders because they are expensive: “but they are more expensive because they are safer.”

However, the region where nuclear power should grow fastest, yet now seems deterred, is the rest of Asia. Two-thirds of the 62 plants under construction in the world are in Asia. Russia plans another ten. By far the most important rising nuclear power is China, which has 13 working reactors and 27 more on the way. China has announced a pause in nuclear commissioning, and a review. But its leaders know that they must move away from coal: the damage to health from a year of Chinese coal-burning is bigger than that from the nuclear industry. And if anyone can build cheap nuclear plants, it is probably the Chinese.

If the West turns its back on nuclear power and China carries on, the results could be unfortunate, for nuclear plants need trustworthy and transparent regulation.

  References

  • The risks exposed: What the damage to the Fukushima plant portends for Japan—and the world; The Economist, March 19th 2011
  • Expensive Iteration: A huge international fusion-reactor project faces funding difficulties; The Economist, July 22nd 2010  

 

 


Game Theory in Computer Science

January 25, 2011
Leave a Comment


        By Galina Vitkova  

Computer science or computing science (sometimes abbreviated CS) is the study of the theoretical foundations of information and computation and of practical techniques for their implementation and application in computer systems. It concerns the systematic study of algorithmic processes that describe and transform information. Computer science has many sub-fields. For example, computer graphics, computational complexity theory (studies the properties of computational problems), programming language theory (studies approaches to describing computations), computer programming (applies specific programming languages to solve specific problems), and human-computer interaction (focuses on making computers universally accessible to people) belong to such very important sub-fields of computer science. 

Game theory has come to play an increasingly important role in computer science. Computer scientists have used games to model interactive computations and to develop communication skills. Moreover, they apply game theory as a theoretical basis for the field of multi-agent systems (MAS), which are systems composed of multiple interacting intelligent agents (or players). Separately, game theory has played a role in online algorithms, particularly in the k-server problem.

Interactive computation is a kind of computation that involves communication with the external world while the computation proceeds. This is in contrast to the traditional understanding of computation, which assumes a simple interface between a computing agent and its environment. Unfortunately, defining adequate mathematical models of interactive computation remains a challenge for computer scientists.

 
An online algorithm is one that can process its input piece by piece in a serial fashion, i.e. in the order that the input is fed to the algorithm, without having the entire input available from the start of the computation. By contrast, an offline algorithm is given the whole problem input from the beginning and is required to output an answer which solves the problem at hand.

[Image: An animation of the quicksort algorithm – via Wikipedia]

(For example, selection sort requires that the entire list be given before it can sort it, while insertion sort does not.) Because the whole input is not known, an online algorithm is forced to make decisions that may later turn out not to be optimal. Thus the study of online algorithms has focused on the quality of decision-making that is possible in this setting.
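As a minimal sketch of this distinction, insertion sort can consume its input one element at a time, keeping the prefix seen so far sorted, without ever needing the rest of the input (the function name and sample data here are just for illustration):

```python
def insertion_sort_online(stream):
    """Insertion sort as an online algorithm: elements arrive one at a
    time, and the prefix processed so far is always kept sorted."""
    sorted_so_far = []
    for x in stream:  # each element is handled as soon as it arrives
        i = len(sorted_so_far)
        sorted_so_far.append(x)
        # shift larger elements right to make room for x
        while i > 0 and sorted_so_far[i - 1] > x:
            sorted_so_far[i] = sorted_so_far[i - 1]
            i -= 1
        sorted_so_far[i] = x
    return sorted_so_far

print(insertion_sort_online(iter([5, 2, 4, 1, 3])))  # [1, 2, 3, 4, 5]
```

Selection sort, by contrast, must scan the entire list to find the minimum before it can place even the first element, so it cannot start until the whole input is available.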

The Canadian Traveller Problem exemplifies the concepts of online algorithms. The goal of this problem is to minimize the cost of reaching a target in a weighted graph where some of the edges are unreliable and may have been removed from the graph. However, the fact that an edge was removed (failed) is only revealed to the traveller when she/he reaches one of the edge's endpoints. The worst case in the study of this problem is simply the situation when all of the unreliable edges fail and the problem reduces to the usual Shortest Path Problem. This problem concerns finding a path between two vertices (or nodes) of the graph such that the sum of the weights of its edges is minimized. An example is finding the quickest way to get from one location to another on a road map. In this case, the nodes represent locations, and the edges represent segments of road and are weighted by the time needed to travel each segment.

[Image: Johnson's algorithm for transforming a shortest-path problem – via Wikipedia]
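The road-map computation described above can be sketched with Dijkstra's algorithm, a standard method for the Shortest Path Problem (the graph, node names and weights below are hypothetical):

```python
import heapq

def dijkstra(graph, start, target):
    """Return the minimum total edge weight from start to target.
    graph maps a node to a list of (neighbour, weight) pairs."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a shorter path was found already
        for neighbour, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return float("inf")  # target unreachable

# Nodes are locations, weights are travel times (made-up values).
roads = {"A": [("B", 4), ("C", 2)],
         "C": [("B", 1), ("D", 7)],
         "B": [("D", 3)]}
print(dijkstra(roads, "A", "D"))  # 6, via A -> C -> B -> D
```

In the Canadian Traveller Problem the traveller cannot simply run this offline computation, because some edges may turn out to be missing only once an endpoint is reached.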

The k-server problem is a problem of theoretical computer science in the category of online algorithms. In this problem, an online algorithm must control the movement of a set of k servers, represented as points in a metric space, and handle requests that are also given in the form of points in the space. As soon as a request arrives, the algorithm must determine which server to move to the requested point. The goal of the algorithm is to keep the total distance all servers move small, relative to the total distance the servers could have moved by an optimal adversary who knows the entire sequence of requests in advance.
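A naive greedy strategy, which always moves the nearest server, illustrates the setting (the points and requests are made up; note that greedy is known not to be competitive in general, so this is only a baseline sketch, not a solution to the problem):

```python
def greedy_k_server(servers, requests):
    """Serve each request with the nearest server (points on a line).
    Returns the total distance moved. Greedy is a simple baseline,
    not a competitive algorithm for the k-server problem."""
    servers = list(servers)
    total = 0
    for r in requests:
        # pick the server closest to the requested point
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        total += abs(servers[i] - r)
        servers[i] = r  # the chosen server moves to the request
    return total

# Two servers at 0 and 10; requests alternate between 4 and 6, so the
# greedy rule bounces a single server back and forth.
print(greedy_k_server([0, 10], [4, 6, 4, 6]))  # 10
```

An adversary who knows the whole request sequence could instead park one server at 4 and the other at 6, paying a fixed cost no matter how long the alternation continues, which is exactly the kind of comparison the competitive analysis of the problem formalizes.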

The problem was first posed in 1990. The most prominent open question concerning the k-server problem is the so-called k-server conjecture. This conjecture states that there is an algorithm with competitive ratio k for solving the k-server problem in an arbitrary metric space and for any number k of servers. The special case of metrics in which all distances are equal is called the paging problem, because it models the problem of page replacement algorithms in memory caches. In a computer operating system that uses paging for virtual memory management, page replacement algorithms decide which memory pages to page out (swap out, write to disk) when a page of memory needs to be allocated. Paging happens when a page fault occurs and a free page cannot be used to satisfy the allocation, either because there are none, or because the number of free pages is lower than a set threshold.
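The paging special case can be illustrated with the LRU (least-recently-used) replacement policy, one common page replacement algorithm; the cache size and request sequence below are made up for the example:

```python
from collections import OrderedDict

def count_page_faults(requests, cache_size):
    """Simulate LRU page replacement and count page faults."""
    cache = OrderedDict()  # pages kept in order of last use
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)  # page hit: mark as recently used
        else:
            faults += 1  # page fault: the page must be brought in
            if len(cache) >= cache_size:
                cache.popitem(last=False)  # evict least recently used
            cache[page] = True
    return faults

print(count_page_faults([1, 2, 3, 1, 4, 2], cache_size=3))  # 5
```

With a cache of three frames, the three initial requests fault, request 1 hits, and requests 4 and 2 each fault again, because LRU, like any online algorithm, cannot know which pages the future will ask for.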

 


Beware of danger when playing PC games online

October 30, 2010
Leave a Comment
[Image: Example of firewall function: blocking spyware – via Wikipedia]

  
By P. B.

The Internet is a place where the user can find a lot of information, entertainment or work, but on the other hand, the same user can “catch” viruses, spyware or malware. Many people don't understand why somebody creates these harmful programs (see the notes below about these programs). However, the answer is easy: just as in ordinary life, we can meet a lot of people with wicked goals. And making money through special programs is a typical goal of many Internet thieves. There are various ways to do it. On the Internet the user may visit web pages which contain viruses, spyware or malware. This happens very often with game pages, because games attract gamblers and can therefore be a source of money.

But the harmful code may not be only on web pages; games themselves can include it. It means that the player, when downloading some game and installing it on the local computer, also installs the harmful code without any suspicion. This can be very dangerous – here is one small example. Imagine the user installed a game that involves a so-called keylogger. A keylogger is a small program that stealthily records all keys the user presses. Many antivirus programs consider this software a virus (usually a Trojan horse – see A worm, a virus or a Trojan horse?). So, the keylogger writes all pressed keys to a txt file and sends it to the thief's e-mail. Suppose the user then visited his online betting site at http://www.tipsport.cz, where he had to write the text “www.tipsport.cz” followed by the username “honza” and the password “sazeni123”. The keylogger put this string of characters into the txt file as “www.tipsport.czhonzasazeni123”. The thief received the file, found this text and was quickly able to log in to the honza account and transfer all the money from Honza's Internet account to his (the thief's) own account. It was easy, wasn't it? Of course, the probability of this coincidence is not very high, but who knows.

[Image: Replica of the Trojan Horse in Troy, Turkey – by Alaskan Dude via Flickr]

 

Notes:

  • Malware means malicious software – authors of malware create programs to harm other software. Malware includes PC viruses, Trojan horses, spyware and adware.
  • Spyware is a program that uses the Internet to send data from the computer without the awareness of the computer's user. It differs from a backdoor in the content of the data it sends, i.e. it sends only statistical data (e.g. an overview of visited pages or installed programs) and can be used for advertising. Spyware is typically widespread in shareware programs, and the authors of the shareware know about it and tolerate it because they want to earn money.
  • Adware, or advertising-supported software, is any software that automatically downloads advertisements to a computer. The goal of adware is to generate revenue for its author. Adware, by itself, is harmless; but some adware may come with integrated spyware such as keyloggers.

See more in What is Adware, Spyware and Anti-virus?

 

 


Choose Genres of PC Games for Your Relaxing

October 29, 2010
Leave a Comment
Composed by Galina Vitkova

PC games, or more generally video games, can be categorized into genres by many factors, such as methods of game playing, types of goals, art style and more. Nevertheless, a lack of consensus is typical in accepting formal definitions of game genres. Since genres are by definition dependent on content, they have changed and evolved as newer styles of video games have appeared.

Commonly used video game genres are listed below with brief descriptions and, in some cases, examples. However, Chris Crawford, a well-known computer game designer and writer, notes that “the state of computer game design is changing quickly. We would therefore expect the taxonomy presented here to become obsolete or inadequate in a short time.” So, he recommends to “think of each individual game as belonging to several genres at once.”

Action games      

An action game puts stress on combat. So, players should use quick reflexes, accuracy, and timing to overcome obstacles. It is perhaps the most basic of game genres, and certainly one of the most widespread.

Fighting games emphasize one-on-one fights between two characters, one of which may be computer controlled. This genre first appeared in 1976 with the release of Sega's Heavyweight Champ and later became a phenomenon, particularly in the arcades, with the release of Street Fighter II.

The plot of maze games is entirely built around a maze, which players must navigate. Quick thinking and fast reaction times are encouraged by the use of a timer, monsters obstructing the player's way, or multiple players racing to the finish. The most famous game of this genre is Pac-Man.

Pinball games are intended to replicate the look and feel of a real-life pinball table in virtual reality. Most pinball games hold the same gameplay style as in a real pinball table with some additional possibilities. In recent years they have become more popular on handheld systems, as opposed to consoles.

Platform games (platformers) involve travelling between platforms by jumping (sometimes by swinging or bouncing). Other traditional elements include running and climbing ladders and ledges. Platformers frequently borrow elements from other genres like fighting and shooting.

Shooter games

A shooter game focuses chiefly on combat involving projectile weapons, such as guns and missiles. Shooter games can be divided into first-person and third-person shooters, depending on perspective. First-person shooters (FPSs) emphasize shooting and combat from the perspective of the character controlled by the player and give the player the feeling of “being there”. Most FPSs are very fast-paced and require quick reflexes on high difficulty levels. Third-person shooters (TPSs or 3PSs) involve shooting and combat from a camera perspective in which the player is seen at a distance. Furthermore, third-person shooters allow more complicated movements, such as rolling or diving, as opposed to the simple jumping and crouching typical of FPS games.

[Image: Official screenshot of Scorched 3D, an artillery game – via Wikipedia]

 

Massively multiplayer online first person shooter games (MMOFPS) combine first-person shooter gameplay with a virtual world in which a large number of players may interact over the Internet. While standard FPS games limit the number of players able to compete in a multiplayer match (generally the maximum is 64), hundreds of players can battle each other on the same server in the game.

A shoot ’em up (or shmup for short), or arcade shooter, is a genre of shooter game in which the player controls a character or vehicle (most often a spacecraft) and shoots large numbers of enemies. Games in this genre call for fast reactions and memorization of enemy patterns. The first game of this type was Spacewar!, developed at the Massachusetts Institute of Technology (MIT) in 1962 for the amusement of the developers; it was later adapted into an arcade game.

Tactical shooters are variations on the first- and third-person shooter genre, which concentrate on realism and highlight tactical play such as planning and teamwork (for example, co-ordination and specialised roles). In single player modes, the player commands a squad of AI controlled characters in addition to his own. In multi-player modes, players must work in teams in order to win the game.

Adventure games 

Adventure games were among the earliest games created. The player typically solves various puzzles by interacting with people or the environment, most often in a non-confrontational way. It is considered a “purist” genre, and it strives to exclude action elements.

A visual novel belongs to adventure games comprising mostly static graphics, usually with anime-style art. They resemble mixed-media novels or tableau vivant stage plays. Many visual novels can have various endings and allow more dynamic reactions to the player’s actions than a typical linear adventure plot. Visual novels are particularly popular in Japan, where they amount to 70% of PC games released.

The interactive movie genre came with the invention of laserdiscs. An interactive movie contains pre-filmed full-motion cartoons or live-action sequences, where the player controls some of the moves of the main character. In these games the only activity the player has is to choose or guess the move the designers intend him to make.

Action-adventure games 

Action-adventure games combine elements of their two component genres, typically featuring long-term obstacles that must be overcome as well as smaller obstacles that stand almost constantly in the way. Action-adventure games tend to focus on exploration and usually involve gathering items, simple puzzle solving, and combat. “Action-adventure” has become a label attached to games which do not fit precisely into another well-known genre.

Role-playing video games                                 

Role-playing video games derive their gameplay from traditional role-playing games (RPGs). Cultural differences in role-playing video games have led towards two sets of characteristics sometimes referred to as Western and Eastern RPGs. The first type often involves the player creating a character and a non-linear storyline along which the player makes his own decisions. In the second type, the player controls a party of predefined characters through a dramatically scripted linear storyline.

The action role-playing game is a type of role-playing game which includes elements from action games or action-adventure games. Although a definition of the genre varies, the typical action RPG heavily accents combat and often simplifies or removes non-combat attributes.

Massively multiplayer online role-playing games (MMORPGs) emerged in the mid-to-late 1990s. Fantasy MMORPGs, like The Lord of the Rings Online: Shadows of Angmar, remain the most popular to date.

The tactical role-playing game sub-genre principally refers to games which incorporate gameplay from strategy games as an alternative to traditional RPG systems. As in standard RPGs, the player controls a finite party and fights battles, but this genre also incorporates strategic gameplay such as tactical movement.

   

[Image: An elven bardess, a magician and a girl – via Wikipedia]

References:   http://en.wikipedia.org/

 


Online game playing

October 25, 2010
3 Comments
By  P. B.

There are a lot of servers on the Internet that provide online game playing. Playing is very easy, and many users who have only basic knowledge of computers and the Internet can play these games. The most common way to start playing is to open a browser and visit the Google page. Then write two words in the search box: online games, and Google immediately offers you many servers, e.g. www.onlinegames.net, www.freeonlinegames.com or the Czech pages www.super-games.cz, etc. Each server offers many games of different sorts. There you may find games for boys, girls and kids, the most played games, new games, and others. Or you can select games by subject, i.e. adventure games, sports games, war games, erotic or strategy games, etc.

[Image: Assigning a path for Leviathan – by Alpha Auer, aka. Elif Ayiter via Flickr]

Many games have their own manual explaining how to play, so the second step is to study the manual. Depending on the subject of the game, the user must use, for example, the Right Arrow key to go forward, the Left Arrow to go back, PgUp to go up, and Ctrl to shoot. It is very easy to understand how to play and to recognize the goal of the game, e.g. to get the maximum number of points, to kill everything that moves or to finish first. These games are rather simple, but some people become too addicted to them, trying to improve their best performance. Sometimes they spend hours in front of the screen every day and lose all track of time.

I have tried four different servers and about six different games. In my opinion these games are very easy and, for me, boring, but for younger users or for people who are bored right now the games can be interesting. However, the most important thing (in my view) is that two of the tested servers were infected (my computer warned me that the pages are dangerous and can contain malware, spyware or viruses). My friends, who have problems with their computers in this sense, want me to repair their computers – maybe that is the reason why I don't like playing games online directly on the Internet.

[Image: Quake3 + net_server – by [Beta] via Flickr]

 

On the other hand, I have also tried the game Quake 3 (a game demo – not over the Internet, but after installing the game on my computer) and I can affirm that it was pretty interesting.

 

Quake 3 Arena is a pure shooting game. There is no goal other than to kill all other players (though in other modes, like Team Deathmatch or Capture the Flag, two teams fight against each other). The player can choose the difficulty level (from easy to hard) and various arenas. Quake 3 Arena is the mode where the player fights in the arena against computer-controlled bots (artificially intelligent fighters).

The fighters do battle equipped with various weapons as follows:

  • Gauntlet – a basic weapon for very close combat, usually used only when the player has no other gun;
  • Machinegun – a weak gun, again used only when a better gun is not available;
  • Shotgun – a weapon for close combat, one shot per second;
  • Grenade Launcher – shoots grenades;
  • Rocket Launcher – a very popular weapon because it is easy to use and its impact is huge; but the flight of a rocket is slow, so players get used to shooting at the wall or floor because the rocket has a large blast radius;
  • Lightning Gun – an electric gun, very effective because it can kill a rival in 2 seconds;
  • Railgun – a weapon for long distances, very accurate, but with a low rate of fire;
  • Plasma Gun – shoots plasma pulses;
  • BFG10K – the most powerful weapon, but the worst balanced, and for this reason not often used by players (BFG = Bio Force Gun).

It is important for the players to find and acquire armor – the maximum is 200 armor points. The armor provides protection, absorbing 2/3 of incoming damage. Similarly, the players can watch their health (at the beginning they have 125 points, which makes 100%, and they can reach a maximum of 200 points).
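A minimal sketch of how armor absorption could work under these rules (the exact Quake 3 formula may differ; the function and the numbers below only illustrate the 2/3 split described above):

```python
def apply_damage(health, armor, damage):
    """Armor absorbs 2/3 of incoming damage, up to what it has left;
    the rest is subtracted from health. Returns (health, armor)."""
    absorbed = min(armor, (2 * damage) // 3)
    armor -= absorbed
    health -= damage - absorbed
    return health, armor

# A 90-damage hit against 100 health and 50 armor: armor would absorb
# 60 but only 50 remains, so health takes the other 40.
print(apply_damage(health=100, armor=50, damage=90))  # (60, 0)
```

This also shows why players hunt for armor: with a full 200 armor points the same hit would cost only 30 health instead of 40 or more.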

Sometimes (depending on the game) additional features are involved – a Battle Suit, Haste (makes movement and shooting twice as fast for 30 seconds), Invisibility (for 30 seconds), a Medkit, a Teleporter (the player is moved to a random place), Regeneration, Flight (for 60 seconds) and so on.

  

 

