Why Technical English

How bloggers can grasp link building

September 1, 2012
1 Comment

By Galina Vitkova

Many bloggers, unfortunately including me, are not able to use SEO effectively to make their blogs more noticeable and visible on the Internet. One of the most important activities that helps achieve this is link building. So I have been studying the topic for some time, trying to work out what would make me fit for link building. That is why I would now like to discuss competent link building for bloggers. It is an opportunity for me to compose a valuable technical text and, at the same time, to understand the process better. I have based my considerations primarily on the article Link Building for Bloggers by Gregory Ciotti, published on 15 June 2012, and on the references given in that article.

According to the references I have studied, link building requires, first of all, solving two main problems:

  1. Where to build links to
  2. How to actually get links.       

Link Building Guidance

Proper places to which your links should point

Best SEO practice recommends building links deep into a blog, i.e. to posts, resource pages, and older content. Three prime places that bloggers are advised to build links to seem feasible and practicable for me too:

1. Resource Pages

Resource pages are pages on the blog that explain what your blog is about. Moreover, these pages are expected to showcase the best content of your blog.

So I have revamped the resource page About on my blog Why Technical English, and now it comprises:

  • What Technical English is and how it differs from general and Business English;
  • A review of the most significant content on the blog;
  • References on Technical English (last revised on 30 August 2012) useful for studying the subject.

I have tried to make the resource page About target the most difficult keywords, i.e. those with the most searches per month (use the Google Keyword Tool to figure this out) – in my case "Technical English". The page is linked from practically all posts on the blog.

2. Blog Posts

The next place to build links to is individual blog posts. According to the recommendations mentioned above, the keywords there may be less competitive, but they should be "long tail". In general, blog posts are best suited for keywords of medium-to-light difficulty.

What to Do to Build and Attract Links to Blogs

There are a number of ways in which bloggers are advised both to build and to attract links to their blogs and posts. I have no experience of my own with link building, but I like three of the ways recommended in the referenced articles. They are as follows:

1. Creating widgets

Widgets, badges (for instance, the simple SEOmoz badge I have placed on my blog), infographics, and other media can be embedded by other people and link back to you.
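As a minimal sketch of how such an embeddable badge carries a link back to its source, the snippet below builds the HTML that another blogger would paste into their page. The URLs and texts are made-up placeholders, not the actual SEOmoz badge code.

```python
# Toy generator of badge embed code; all URLs and texts are hypothetical placeholders.
from html import escape

def badge_embed_html(target_url: str, badge_img: str, alt_text: str) -> str:
    """Return an HTML snippet showing a badge image that links back to target_url."""
    return (
        f'<a href="{escape(target_url, quote=True)}">'
        f'<img src="{escape(badge_img, quote=True)}" alt="{escape(alt_text, quote=True)}"></a>'
    )

if __name__ == "__main__":
    print(badge_embed_html(
        "https://your-blog.example.com/",    # the page the link should point to
        "https://example.com/badge.png",     # hypothetical badge image
        "Why Technical English badge",
    ))
```

Whoever embeds such a snippet on their own page thereby creates a backlink to the badge's owner – which is exactly the link-building effect described above.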

2. Round-Up posts

People are said to love round-up or review posts, and such posts are supposed to work for getting links in almost every niche. Big round-ups that are niche-specific are practically guaranteed to get mentioned and, more importantly, linked to. I am planning to prepare a round-up post about technical texts for studying Technical English. Needless to say, such a post should have a clean layout, simplicity, and a focus on the content.

3. Crowdsourced Posts

Crowdsourced posts are posts that include the opinions of many knowledgeable experts, e.g. Social Media Examiner's "Predictions for Social Media in 2012". Basically, you collect a set of short excerpts from the experts' assessments and put them all in one post.

 To your success!

PS: If you need to, you can look up the technical terms used in this post in Russian and Czech in the Internet English Vocabulary.

 


100% integration of renewable energies?

August 13, 2011
1 Comment

Composed by Galina Vitkova

The Renewables-Grid-Initiative (RGI) promotes effective integration of 100% electricity produced from renewable energy sources.

Energy Green Supply

I do not believe this RGI statement. I am sure that it is impossible from the technical and technological points of view. Simply recall the very low share of renewables in total world electricity production (3% excluding hydroelectricity), the very high investment costs, and the very high prices of electricity produced from renewables today.

Concerns about climate and energy security (especially in the case of nuclear power plants) are the reasons behind the efforts for a quick transformation towards a largely renewable power sector. The European emissions reduction targets, meant to keep the temperature increase below 2°C, require the power sector to be fully decarbonised by 2050. Large parts of society demand that this decarbonisation be achieved predominantly with renewable energy sources.

Illustration: Different types of renewable energy.

Renewables advocates do not say much about real solutions to the real, greatly complex problems of renewable sources. Very often they are not even aware of them. Even though renewable energy technologies are now established and appreciated by officials and green activists as a key means of producing electricity in a climate- and environment-friendly way, many crucial problems remain unsolved. Additional power lines, which are needed for transporting electricity from new renewable generation sites to users, have a negative impact on the environment, including biodiversity, ecosystems and the landscape. Furthermore, the electricity surpluses produced by renewables when electricity consumption is very low cause enormous problems with storing those surpluses. Besides, there are serious problems with the dispatch control of a power system with a high penetration of renewables (see Variability and intermittency of wind energy in Number 31 – Giving a definition / July 2011). On the whole, the three most important problems waiting to be solved, each of which demands massive investment, are:

  • building a great number of additional electricity transmission lines, owing to the numerous and dispersed renewable sites;
  • accommodating electricity storage needs when surpluses from renewables arise;
  • integrating intermittent sources of electricity production into the scheduled control of power grids.

Thus, concerns about the impacts of renewables integration in European power systems need to be carefully studied, fully understood and addressed.
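To make the third problem more concrete, here is a minimal sketch, with entirely invented hourly figures, of the residual load that dispatchable plants must cover once intermittent wind output is subtracted from demand.

```python
# Toy residual-load calculation with invented hourly figures (MW).
# Residual load = demand - intermittent generation; a negative value means
# a surplus that must be stored, exported, or curtailed.
demand = [420, 400, 390, 450, 520, 610, 680, 650]   # hypothetical system demand
wind   = [300, 350, 420, 380, 150,  80,  60, 200]   # hypothetical wind output

for hour, (d, w) in enumerate(zip(demand, wind)):
    residual = d - w
    status = "dispatchable plants must supply this" if residual >= 0 else "surplus to store or export"
    print(f"hour {hour}: residual load {residual:+5d} MW -> {status}")
```

Even in this toy example the dispatchable plants swing between absorbing a surplus and supplying over 600 MW within a few hours, which is exactly the scheduling problem listed above.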

Let us consider more closely the issues of building new transmission lines. In the coming decade thousands of kilometers of new lines should be built across Europe. Renewable energy sources are abundant and varied, but they are mostly available in remote areas where demand is low and economic activity infrequent. Therefore, thorough strategic planning is required to realise a new grid infrastructure that meets the electricity needs of the next 50-70 years. The new grid architecture is supposed to enable the integration of all renewable energy sources – regardless of where and when they generate – and to expand the possibilities for distributed generation and demand-side management.

Grid expansion is inevitable but often controversial. The transmission system operators (TSOs) need not only to accommodate the 2020 targets but also to prepare for the more challenging full decarbonisation of the power sector by 2050. The non-governmental organisations (NGO Global Network) community is still not united in supporting or opposing the grid expansion. A number of technical, environmental and health questions need to be addressed and clarified to improve shared understanding among and across TSOs and NGOs. RGI is trying to bring cooperating TSOs and NGOs together.

The grid expansion could be accomplished by means of overhead lines and underground cables. Both may carry alternating current (AC) or direct current (DC). In the past it was relatively easy to choose between lines and cables:

Cables were mainly used in the grid for shorter distances, mostly because they are more expensive and have a shorter technical lifetime (about 50% of that of overhead lines), whereas overhead lines were used in other cases. Nowadays the situation is more complex, since more options and more parameters have to be considered. In the future cables will probably be utilised even more, as development is moving towards higher power levels.

Cables enjoy higher public acceptance because of their lower disturbance of natural scenery, lower electromagnetic radiation, avoidance of harm to wildlife, and higher weather tolerance. Overhead lines, unfortunately, disturb the scenery and seriously affect wildlife and protected areas.

Grid development that expands renewables by means of overhead lines endangers bird populations in Europe. High, large-scale bird mortality from above-ground power lines results from:

  • Risk of electrocution,
  • Risk of collision,
  • Negative impacts on habitats.

All of that constitutes a significant threat to birds and other wildlife. For these reasons, standards to protect birds (under the Habitats and Birds Directives) are being worked out.

Moreover, the European Commission is currently working on new legislation to ensure that the energy infrastructure needed for implementing the EU climate and energy targets will be built in time.



Fusion reactors in the world

May 10, 2011
Leave a Comment
Implosion of a fusion microcapsule

Composed by Galina Vitkova

Fusion power is power generated by nuclear fusion processes. In fusion reactions two light atomic nuclei fuse together to form a heavier nucleus. During the process a comparatively large amount of energy is released.

The term “fusion power” is commonly used to refer to potential commercial production of usable power from a fusion source, comparable to the usage of the term “steam power”. Heat from the fusion reactions is utilized to operate a steam turbine which in turn drives electrical generators, similar to the process used in fossil fuel and nuclear fission power stations.

Fusion power has significant safety advantages in comparison with current power stations based on nuclear fission. Fusion only takes place under very limited and controlled conditions, so a failure of precise control or a pause in fueling quickly shuts the fusion reactions down. There is no possibility of runaway heat build-up or of a large-scale release of radioactivity, and there is little or no atmospheric pollution. Furthermore, the fuel comprises light elements in small quantities, which are easily obtained and largely harmless to life, and the waste products are short-lived in terms of radioactivity. Finally, there is little overlap with nuclear weapons technology.

 

Fusion Power Grid

 

Fusion-powered electricity generation was initially believed to be readily achievable, as fission power had been. However, the extreme requirements for continuous reactions and plasma containment led to projections being extended by several decades. More than 60 years after the first attempts, commercial fusion power production is still believed to be unlikely before 2040.

The leading designs for controlled fusion research use magnetic (tokamak design) or inertial (laser) confinement of a plasma.

Magnetic confinement of a plasma

The tokamak (see also Number 29 – Easy such and so / April 2011, Nuclear power – tokamaks), using magnetic confinement of a plasma, dominates modern research. Very large projects like ITER (see also The Project ITER – past and present) are expected to pass several important milestones toward commercial power production, including a burning plasma with long burn times, high power output, and online fueling. There are no guarantees that the project will be successful. Unfortunately, previous generations of tokamak machines have revealed new problems many times. But the entire field of high-temperature plasmas is much better understood now than formerly, so ITER is optimistically expected to meet its goals. If successful, ITER would be followed by a "commercial demonstrator" system, similar in purpose to the very earliest power-producing fission reactors built before wide-scale commercial deployment of larger machines started in the 1960s and 1970s.

Ultrascale scientific computing

 

Stellarators, which also use magnetic confinement of a plasma, are among the earliest controlled fusion devices. The stellarator was invented by Lyman Spitzer in 1950 and built the following year at what later became the Princeton Plasma Physics Laboratory. The name "stellarator" originates from the possibility of harnessing the power source of the sun, a stellar object.

Stellarators were popular in the 1950s and 60s, but the much better results from tokamak designs led to their falling from favor in the 1970s. More recently, in the 1990s, problems with the tokamak concept have led to renewed interest in the stellarator design, and a number of new devices have been built. Some important modern stellarator experiments are Wendelstein, in Germany, and the Large Helical Device, in Japan.

Inertial confinement fusion

Inertial confinement fusion (ICF) is a process where nuclear fusion reactions are initiated by heating and compressing a fuel target, typically in the form of a pellet. The pellets most often contain a mixture of deuterium and tritium.


 

To compress and heat the fuel, energy is delivered to the outer layer of the target using high-energy beams of laser light, electrons or ions, although for a variety of reasons, almost all ICF devices to date have used lasers. The aim of ICF is to produce a state known as “ignition”, where this heating process causes a chain reaction that burns a significant portion of the fuel. Typical fuel pellets are about the size of a pinhead and contain around 10 milligrams of fuel. In practice, only a small proportion of this fuel will undergo fusion, but if all this fuel were consumed it would release the energy equivalent to burning a barrel of oil.
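As a rough back-of-the-envelope check of that last claim, the sketch below estimates the energy released by 10 milligrams of deuterium–tritium fuel (each D–T reaction yields about 17.6 MeV) and compares it with the roughly 6 GJ contained in a barrel of oil. Only standard physical constants are used; the result lands on the same order of magnitude as the barrel-of-oil comparison.

```python
# Back-of-the-envelope estimate: energy from 10 mg of deuterium-tritium fuel.
MEV_TO_J = 1.602e-13            # 1 MeV in joules
AMU_TO_KG = 1.6605e-27          # atomic mass unit in kilograms
E_PER_REACTION_MEV = 17.6       # energy released by one D-T fusion reaction
PAIR_MASS_AMU = 2.014 + 3.016   # mass of one deuterium plus one tritium nucleus

fuel_mass_kg = 10e-6                                   # 10 milligrams of D-T fuel
n_pairs = fuel_mass_kg / (PAIR_MASS_AMU * AMU_TO_KG)   # number of D-T pairs in the pellet
energy_j = n_pairs * E_PER_REACTION_MEV * MEV_TO_J

barrel_of_oil_j = 6.1e9         # roughly 6.1 GJ per barrel of crude oil
print(f"Energy from 10 mg of D-T fuel: {energy_j / 1e9:.1f} GJ")
print(f"Equivalent to about {energy_j / barrel_of_oil_j:.1f} barrels of oil")
```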

To date most of the work in ICF has been carried out in France and the United States, and ICF has generally seen less development effort than magnetic approaches. Two large projects are currently underway: the Laser Mégajoule in France and the National Ignition Facility in the United States.

All functioning fusion reactors are listed in eFusion experimental devices, classified by confinement method.

Reference: Wikipedia, the free encyclopedia, http://en.wikipedia.org/


Tactical Media and games

December 1, 2010
Leave a Comment

Composed by Galina Vitkova

  

Introductory notes

Tactical media is a form of media activism that uses media and communication technologies for social movements and privileges temporary, hit-and-run interventions in the media sphere. Attempts to spread information not available through mainstream news are also called media activism. The term was first introduced in the mid-1990s in Europe and the United States by media theorists and practitioners. Since then, it has been used to describe the practices of a vast array of art and activist groups. Tactical media also shares something with the hacker subculture, and in particular with software and hardware hacks which modify, extend or unlock closed information systems and technologies.

Tactical Media in Video Games

Video games have opened a completely new avenue for tactical media artists. This form of media allows a wide range of audiences to be informed about a specific issue or idea. Some examples of games that touch on tactical media are Darfur is Dying and September 12. One example of a game design studio that works in tactical media is TAKE ACTION games (TAG). The video game website www.newsgaming.com strongly embodies the idea of tactical media in video games. Newsgaming coined this name for a new genre that raises awareness of current news issues based on real-world events, as opposed to the fantasy worlds that other video games are built on. It contributes to an emerging culture largely aimed at raising awareness of important matters in a new and striking way.

Other examples of tactical media within video games include The McDonald's Game. The author of this game takes information from the executive officers of McDonald's and gives it to the public, informing people about how McDonald's does its business and what means it uses to accomplish it.

Chris Crawford’s Balance of the Planet, made in 1990, is another example of tactical media, in which the game describes environmental issues.

Darfur is Dying description   

Camp of internally displaced Darfuris (image via Wikipedia)

Origination

It is a browser game about the crisis in Darfur, western Sudan. The game won the Darfur Digital Activist Contest sponsored by mtvU (MTV's university campus channel). Released in April 2006, it had been played by more than 800,000 people by September. It is classified as a serious game, specifically a newsgame.
The game design was led by Susana Ruiz (then a graduate student at the Interactive Media Program at the School of Cinematic Arts at the University of Southern California) as part of TAKE ACTION games. In October 2005 she was attending the Games for Change conference in New York City, where mtvU announced that they, in partnership with other organizations, were launching the Darfur Digital Activist Contest for a game. The game was also to serve as an advocacy tool about the situation in the Darfur conflict. Since mtvU offered funding and other resources, Ruiz decided to participate in the project.
Ruiz formed a design team and spent two months creating a game design document and prototype. The team spent much of the design phase talking to humanitarian aid workers with experience in Darfur and brainstorming how to make a game that was both interesting to play and an effective advocacy tool. The Ruiz team's beta version was put up for public review, along with the other finalists, and was chosen as the winner. The team then received funding to complete the game. The game was officially released at a Save Darfur Coalition rally on 30 April 2006.
Map of Darfur, Sudan (image via Wikipedia)

 

Gameplay

The game begins with the player choosing a member of a Darfuri family that has been displaced by the conflict. The first of the game's two modes has the player controlling that family member, who travels from the camp to a well and back while dodging patrols of the janjaweed militia. If the character is captured, the player is informed what has happened to him or her and asked to select another member of the family and try again. If the water is successfully carried back to the camp, the game switches into its second mode – a top-down management view of the camp, where the character must use the water for crops and to build huts. When the water runs out, the player must return to the water-fetching level to progress. The goal is to keep the camp running for seven days.
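Purely as an illustration of that two-mode structure (not code from the actual game), here is a minimal sketch of the loop: a water run that can fail, followed by camp management that consumes the water, repeated until seven days have passed. All probabilities and quantities are invented.

```python
# Toy sketch of the two-mode loop described above; the 30% capture chance and
# the "3 days of water per run" figure are invented, not taken from the real game.
import random

def water_run() -> bool:
    """Return True if the chosen family member evades the militia patrols."""
    return random.random() > 0.3

def play(days_to_survive: int = 7) -> None:
    day = 0
    while day < days_to_survive:
        if not water_run():
            print("Captured - choose another family member and try again.")
            continue
        water = 3                      # water fetched in one successful run
        while water > 0 and day < days_to_survive:
            water -= 1                 # spent on crops and building huts
            day += 1
            print(f"Day {day}: camp keeps running ({water} units of water left)")
    print("The camp survived for seven days.")

if __name__ == "__main__":
    play()
```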

 


 Reception of the game

The game has been reported by mainstream media sources such as The Washington Post, Time Magazine, BBC News and National Public Radio. In an early September 2006 interview, Ruiz stated that it was difficult to determine success for a game with a social goal, but affirmed that more than 800,000 people had played it 1.7 million times since its release.  Moreover, tens of thousands of them had forwarded the game to friends or sent a letter to an elected representative. As of April 2007, the game has been played more than 2.4 million times by over 1.2 million people worldwide.

The game has been the focus of debate about its nature and impact. Some academics interviewed by the BBC about the game stated that anything that might spark debate over Darfur and the issues surrounding it is a clear gain for the advocates. Others thought that the game oversimplified a complex situation and thus failed to address the actual issues of the conflict. The game was also criticized for the sponsorship of mtvU, raising the possibility that it might seem like a marketing tool for the corporation. The official site does not use the word "game", but refers to Darfur is Dying as a "narrative based simulation."

 


 


Online game playing

October 25, 2010
3 Comments
By  P. B.

There are a lot of servers on the Internet that offer playing games online. Playing is very easy, and many users with only basic knowledge of computers and the Internet can play these games. The most common way to start is to open a browser and visit the Google page. Then type two words into the search box: online games, and Google immediately offers you many servers, e.g. www.onlinegames.net, www.freeonlinegames.com or the Czech pages www.super-games.cz, etc. Each server offers many games of various sorts. There you may find games for boys, girls and kids, the most-played games, new games, and others. Or you can select games by subject, i.e. adventure games, sports games, war games, erotic or strategy games, etc.

Assigning a path for Leviathan

Image by Alpha Auer, aka. Elif Ayiter via Flickr

Many games have their own manual on how to play, so the second step is to study the manual. Depending on the subject of the game, the user must use, for example, the Right Arrow key to go forward, the Left Arrow to go back, PgUp to go up, and Ctrl to shoot. It is very easy to understand how to play and to recognize the goal of the game, e.g. to collect the maximum number of points, to kill everything that moves, or to finish first. These games are rather simple-minded, but some people become too addicted to them, trying to improve on their best performance. Sometimes they spend hours in front of the screen every day and lose all track of time.

I have tried four different servers and about six different games. In my opinion these games are very easy and, for me, boring, but for younger users or for people who are bored right now they can be interesting. However, the most important thing (in my view) is that two of the tested servers were infected (my computer warned me that the pages are dangerous and can contain malware, spyware or viruses). My friends who have problems with their computers in this sense want me to repair them – maybe that is the reason why I don't like playing games online directly on the Internet.

Quake3 + net_server
Image by [Beta] via Flickr

 

On the other hand, I have also tried the game Quake 3 (the game demo – not over the Internet, but after installing the game on my computer), and I can affirm that it was pretty interesting.

 

Quake 3 Arena is a real shooting game. There is no goal other than to kill all the other players (though in other modes, such as Team Deathmatch or Capture the Flag, two teams fight against each other). The player can choose the difficulty level (from easy to hard) and various arenas. Quake 3 Arena is the mode where the player fights in the arena against computer-controlled bots (artificially intelligent fighters).

The fighters do battle equipped with various weapons as follows:

  • Gauntlet – a basic weapon for very close combat, usually used only when the player has no other gun;
  • Machinegun – a weak gun, again used only when a better gun is not available;
  • Shotgun – a weapon for close combat, one shot per second;
  • Grenade Launcher – shoots grenades;
  • Rocket Launcher – a very popular weapon because it is very easy to use and its impact is huge; but the flight of a rocket is slow, so players get used to shooting at the wall or floor because the explosion has a wide splash area;
  • Lightning Gun – an electric gun, very effective because it can kill a rival in 2 seconds;
  • Railgun – a weapon for long distances, very accurate, but with a low rate of fire;
  • Plasma Gun – shoots plasma pulses;
  • BFG10K – the most powerful weapon, but the worst balanced, and for this reason it is not often used by players (BFG = Bio Force Gun).

It is important for players to find and pick up armor – the maximum is 200 armor points. Armor provides protection that absorbs 2/3 of incoming damage. Similarly, players have to watch their health (at the beginning they have 125 points, which counts as 100%, and they can reach a maximum of 200 points).
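As a small illustration of that 2/3 rule, here is a sketch of how a single hit might be split between armor and health. The exact bookkeeping in the real engine may differ (caps, rounding, self-damage), so treat this as an assumption-based model rather than the game's actual code.

```python
# Toy damage model based on the 2/3 absorption rule described above.
def apply_hit(health: int, armor: int, damage: int) -> tuple[int, int]:
    absorbed = min(armor, (2 * damage) // 3)   # armor soaks up to 2/3 of the hit
    health -= damage - absorbed                # the rest is taken from health
    armor -= absorbed                          # absorbed damage wears the armor down
    return max(health, 0), max(armor, 0)

health, armor = 125, 100             # starting health plus some picked-up armor
for dmg in (50, 80, 100):            # a few hypothetical hits
    health, armor = apply_hit(health, armor, dmg)
    print(f"hit for {dmg}: health={health}, armor={armor}")
```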

Sometimes (depending on the game) additional features are involved – a Battle Suit, Haste (makes movement and shooting twice as fast for 30 seconds), Invisibility (for 30 seconds), a Medkit, a Teleporter (the player is moved to a random place), Regeneration, Flight (for 60 seconds) and so on.

  

 


Video Games Platforms

October 6, 2010
Leave a Comment
  
Composed by Galina Vitkova

 

Terminology

The term game platform refers to the particular combination of electronic or computer hardware which, in connection with low-level software, allows a video game to run. In general, a hardware platform means a group of compatible computers that can run the same software. A software platform comprises a major piece of software, such as an operating system, an operating environment, or a database, under which various smaller application programs can be designed to run. The main video game platforms are reviewed below.

  

Platforms for PC games 

PC games often require specialized hardware in the user's computer in order to play, such as a specific generation of graphics processing unit or an Internet connection for online play, although these system requirements vary from game to game. In any case, your PC's hardware capabilities should meet the minimum requirements set for the particular game. On the other hand, many modern computer games allow, or even require, the player to use a keyboard and mouse simultaneously, without demanding any additional devices.
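As a toy illustration of checking a machine against such minimum requirements, the sketch below compares a hypothetical system specification with a hypothetical game's minimums; all field names and figures are invented, not taken from any real game.

```python
# Toy check of a system specification against a game's minimum requirements.
# Every name and number here is invented for illustration.
game_minimum = {"ram_gb": 4, "vram_gb": 1, "cpu_ghz": 2.4, "disk_gb": 20}
my_system = {"ram_gb": 8, "vram_gb": 2, "cpu_ghz": 2.0, "disk_gb": 500}

failures = [key for key, needed in game_minimum.items() if my_system.get(key, 0) < needed]
if failures:
    print("Below minimum requirements for:", ", ".join(failures))
else:
    print("The system meets the game's minimum requirements.")
```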

As of the 2000s, PC games are often regarded as offering a deeper and more complex experience than console games. 

 

Video game consoles platform

A video game console is an interactive entertainment computer or modified computer system that produces a video display signal which can be used with a display device to show video games.    

Usually, this system is connected to a common television set or composite video monitor. A composite monitor is any analog video display that receives input in the form of an analog composite video signal through a single cable. The monitor is different from a conventional TV set because it does not have an internal RF (Radio Frequency) tuner or RF converter. However, a user can install an external device that emulates a TV tuner. 

  

Handheld game consoles platform

A handheld game console is a lightweight, portable electronic device of small size with a built-in screen, game controls and speakers. Its small size allows people to carry handheld game consoles with them and play games at any time and in any place.

A One Station handheld console with game

Image via Wikipedia

The oldest true handheld game console with interchangeable cartridges is the Milton Bradley Microvision, released in 1979.

Nintendo released the Game Boy, a popular handheld console concept, in 1989, and continues to dominate the handheld console market with successive Game Boy models and, most recently, the Nintendo DS.

  

Handheld electronic games platform

In the past decade, handheld video games have become a major sector of the video game market. For example, in 2004 sales of portable software titles exceeded $1 billion in the United States.

The Gizmondo handheld video game unit (image via Wikipedia)

Handheld electronic games are very small portable devices for playing interactive electronic games, often miniaturized versions of video games. The controls, display and speakers are all part of a single unit. They usually have displays designed to play one game. Thanks to this simplicity they can be made as small as a digital watch, and sometimes are. Usually they have no interchangeable cartridges or disks and are not reprogrammable. The visual output of these games can range from a few small light bulbs or light-emitting diode (LED) lights to calculator-like alphanumeric screens. Nowadays these outputs have mostly been displaced by liquid crystal and vacuum fluorescent display screens. Handhelds were most popular from the late 1970s into the early 1990s. They are both the precursors of and inexpensive alternatives to the handheld game console.

Mobile games platform

A mobile game is a video game played on a mobile phone, smartphone, PDA (Personal Digital Assistant), handheld computer or portable media player.  

The 16 best iPhone games of 2009

Image by docpop via Flickr

The first game that was pre-installed onto a mobile phone was Snake on selected Nokia models in 1997. Snake and its variants have since become the most-played video game on the planet, with over a billion people having played the game. Mobile games are played using the technologies present on the device itself. The games may be installed over the air, they may be side loaded onto the handset with a cable, or they may be embedded on the handheld devices by the original equipment manufacturer (OEM) or by the mobile operator. 

For networked games, there are various technologies in common use, for example, text message (SMS), multimedia message (MMS) or GPRS location identification. 

  

Arcade games 

The Simpsons arcade game by Konami

Image by Lost Tulsa via Flickr

An arcade game is a coin-operated entertainment machine, usually installed in public businesses such as restaurants, public houses, and video arcades. Most arcade games are redemption games, merchandisers (such as claw cranes), video games, or pinball machines. The golden age of arcade video games, in the early 1980s, was a peak era of video arcade game popularity, innovation, and earnings.

Furthermore, by the late 1990s and early 2000s, networked gaming via consoles and computers across the Internet had appeared and replaced arcade games. Arcades also lost their position at the forefront of new game releases. Given the choice between playing a game at an arcade three or four times (perhaps 15 minutes of play for a typical arcade game) and renting, at about the same price, the exact same game for a video game console, people chose the console. To remain viable, arcades added other elements to complement the video games, such as redemption games, merchandisers, and games that use special controllers largely inaccessible to home users. Besides, they equipped games with reproductions of automobile or airplane cockpits, motorcycle- or horse-shaped controllers, or highly dedicated controllers such as dance mats and fishing rods. Moreover, today arcades have extended their activities to food service etc., striving to become "fun centers" or "family fun centers".

All modern arcade games use solid state electronics and integrated circuits. In the past coin-operated arcade video games generally used custom per-game hardware often with multiple CPUs, highly specialized sound and graphics chips, and the latest in computer graphics display technology. Recent arcade game hardware is often based on modified video game console hardware or high-end PC components.

References:   http://en.wikipedia.org/

 

 


Contemporary gaming

August 29, 2010
2 Comments
Composed by Galina Vitkova

 

Dear friends of Technical English!

Having finished discussing the new features of Windows 7, we were looking for topics suitable for studying Technical English. After a relatively short time we chose computer games, for the following reasons:

  • This topic may be interesting for a larger number of people studying and needing English. As is well known, studying subjects that fascinate you is much more effective;
  • PC games apply many of the general methods and tools that are commonly used for building software applications and systems, which means the same terminology is used in both cases. Thus, by studying technical texts about PC games, we can significantly enrich our professional vocabulary with those technical terms;
  • Games contribute to the development of communication skills and quick reactions, and interest in a game strengthens your ability to remember new words and expressions;
  • PC games, in spite of their controversial reputation, provide a good means of relaxation.

Enjoy the text and participate in discussion!

 

Contemporary gaming

Personal computer games (also known as computer games or PC games) have evolved from the simple graphics and gameplay of early titles like Spacewar to a wide range of visually far more advanced titles.

 

Playing Spacewar

Image by Marcin Wichary via Flickr

 

 

Although personal computers only became popular with the development of microprocessors, computer gaming on mainframes and minicomputers has existed since at least the 1960s. The first generation of PC games were often text adventures or interactive fiction, in which the player communicated with the computer by entering commands through a keyboard. Increasing adoption of the computer mouse and of high-resolution bitmap displays allowed new releases to include increasingly high-quality graphical interfaces. Further improvements to games came with the introduction of the first sound cards in 1987. These cards allowed IBM PC compatible computers to produce complex sounds using frequency modulation (FM synthesis); previously those computers had been limited to simple tones and beeps.
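To make the idea of FM synthesis concrete, here is a minimal sketch (not tied to any particular sound card) that generates a short FM tone – a carrier whose phase is modulated by a second sine wave – and writes it to a WAV file. The frequencies and modulation index are arbitrary illustrative choices.

```python
# Minimal FM synthesis sketch: y(t) = sin(2*pi*fc*t + beta*sin(2*pi*fm*t)).
# The carrier, modulator and modulation index below are arbitrary values.
import math, struct, wave

RATE = 44100                        # samples per second
DURATION = 1.0                      # seconds of audio
FC, FM, BETA = 440.0, 110.0, 3.0    # carrier frequency, modulator frequency, modulation index

samples = []
for n in range(int(RATE * DURATION)):
    t = n / RATE
    value = math.sin(2 * math.pi * FC * t + BETA * math.sin(2 * math.pi * FM * t))
    samples.append(int(value * 32767))          # scale to the 16-bit signed range

with wave.open("fm_tone.wav", "wb") as f:
    f.setnchannels(1)               # mono
    f.setsampwidth(2)               # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(struct.pack(f"<{len(samples)}h", *samples))
print("Wrote fm_tone.wav")
```

Varying BETA changes how rich the resulting spectrum is, which is how a single FM operator pair can imitate quite different instruments.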

 

Xbox 360 Case Mod - mosaic Tomb Raider Legend case

 

By 1996, the rise of Microsoft Windows and the success of 3D console titles had given rise to great interest in hardware-accelerated 3D graphics on IBM PC compatible computers, and soon resulted in attempts to produce affordable solutions. Tomb Raider, which was released in 1996, was one of the first shooter games acclaimed for its revolutionary graphics. However, major changes to the Microsoft Windows operating system made many older MS-DOS-based games unplayable on Windows NT and, later, Windows XP without using an emulator. Faster graphics accelerators and improving CPU technology resulted in increasing levels of realism in computer games. During this time, these improvements allowed developers to increase the complexity of modern game engines. PC gaming currently tends strongly toward improvements in 3D graphics.

Concurrently, many game publishers began to experiment with new forms of marketing. Nowadays episodic gaming is chief among these alternative strategies. This kind of gaming is an adaptation of the older concept of expansion packs, in which game content is provided in smaller quantities but for a proportionally lower price. Titles such as Half-Life 2: Episode One took advantage of the idea, with mixed results arising from concerns about the amount of content provided for the price.

The multi-purpose nature of personal computers often allows users to modify the content of installed games with relative ease, in comparison with console games. Console games are generally difficult to modify without proprietary means, and they are often protected by legal and physical barriers against tampering. By contrast, the personal computer versions of games may be modified using common, easy-to-obtain software. Users can then distribute their customised version of the game (commonly known as a mod) by any means they choose.

The inclusion of map editors such as UnrealEd with the retail versions of many games, and their availability online, allows users to create modifications for games smoothly. Moreover, users may use tools maintained by the games' original developers for this purpose. In addition, companies such as id Software have released the source code of older game engines, thus enabling the creation of entirely new games and major changes to existing ones.

Modding has allowed much of the community to produce game elements that would not normally be provided by the developer of the game. It has thereby made it possible to expand or modify normal gameplay to varying degrees.

References:   http://en.wikipedia.org/


Hanlin e-Reader V5

July 14, 2010
9 Comments

By P.B. 

  

A week ago I bought my first e-book reader – the Hanlin eReader V5. It cost 5,500 Czech crowns (about US$275) including VAT. The set includes a leather case, the e-Reader, a USB cable, a power adapter, a manual, a screwdriver, headphones and a 2 GB SD card with 250 e-books in Czech (by the Czech authors J. A. Komensky and K. Capek, plus J. London, etc.).

Here is my general impression after 4 days of owning this device.

Advantages appear to be as follows:

  • it is easy to understand how this product works;
  • reading is comfortable (the display is of good quality) and the user can set the letter size (this depends on the document format – PDF offers 5 sizes, TXT offers 3);
  • the e-Ink technology (see the note at the end of this article) makes it possible to read approximately 8000 pages without recharging the battery;
  • it supports many formats: PDF, DOC, HTML, JPEG, GIF, MP3, ZIP, RAR and some others;
  • the device supports both reading and listening to modern English books, which is very good for studying;
  • a 2 GB SD card is included, but it is possible to use a 16 GB SD card (the internal memory is 384 MB); note: Secure Digital (SD) is a non-volatile memory card format developed by Panasonic, SanDisk, and Toshiba for portable devices;
  • the e-Reader has a reasonable size;
  • it supports bookmarks (up to 7 bookmarks per book, which the user may delete);
  • it allows jumping directly to a specific page (for example, page 721);
  • the system remembers the last 16 files;
  • besides reading and listening to books, it also allows listening to music and showing pictures.

 Disadvantages seem to be as follows:

  • sometimes problems with special characters (č, š, ř, ž…) arise;
  • slow functioning (it is better to create a tree of folders with books – it then runs faster);
  • the hyphenation of words at the end of a line varies (e.g. „udělat“ can be split ud-ělat, uděl-at, etc.);
  • it doesn't support searching in your library;
  • when the reading and listening functions are used together, the audio is not always of good quality (I have tried only one source – maybe it happened by chance);
  • it doesn't have a touchscreen;
  • not all paper books (especially by Czech authors) are available as e-books.

 General conclusion

  • it is expensive (but that depends on your considerations);
  • the e-Reader can hold many e-books (depending on the memory card);
  • it is very good for holidays or for use when travelling by city transport;
  • in the future, when more sophisticated functions are added, it will bring great convenience to ordinary people.

 

NOTE: E-Ink is used in electronic paper (e-paper), or electronic ink displays, which imitate the appearance of ordinary ink on paper. E-paper reflects ambient light like ordinary paper rather than emitting its own light. An e-paper display can be read in direct sunlight without the image appearing to fade; in that case the contrast is actually at its best. Conversely, in places that are not well lit, problems with contrast can appear.

With this technology the display draws energy from the battery only while it redraws its content, which lasts just a fraction of a second. For the rest of the display time no power is needed. This feature lets the e-Reader offer hundreds of hours of reading without charging; e.g. with a standard Li-pol battery it manages about 8000 pages of text.
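As a rough, assumption-laden check of that 8000-page figure, the sketch below estimates how many page refreshes a small battery could power if each refresh draws energy only for a fraction of a second. The capacity, refresh power and overhead figures are invented for illustration, not measured on the V5.

```python
# Rough estimate of page refreshes per battery charge for an e-Ink display.
# All figures are illustrative assumptions, not measurements of the Hanlin V5.
battery_capacity_wh = 3.7 * 1.0      # hypothetical 1000 mAh cell at 3.7 V
refresh_power_w = 1.5                # hypothetical power draw while redrawing a page
refresh_time_s = 0.5                 # the redraw lasts only a fraction of a second
overhead_fraction = 0.5              # assume half the energy goes to the electronics

energy_per_page_wh = refresh_power_w * refresh_time_s / 3600
usable_energy_wh = battery_capacity_wh * (1 - overhead_fraction)
print(f"Estimated pages per charge: {usable_energy_wh / energy_per_page_wh:,.0f}")
```

With these made-up numbers the estimate comes out in the same ballpark as the quoted 8000 pages, which shows why a display that consumes power only while redrawing can last so long.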

The e-Reader can also show videos or photos, but the display quality for them is not good.

 


How XML has been arising

May 14, 2010
3 Comments

Composed by G. Vitkova

Dear colleagues,

The following text about XML is intended to refresh basic knowledge about this integrating tool used across the Internet. Our aim is to prepare a platform for a discussion of the further improvements in user comfort introduced and implemented in the latest versions of Windows, which build on XML. Enjoy the text and discuss. Galina Vitkova

Beginnings

XML (Extensible Markup Language) is a set of rules for encoding documents electronically. Its design goals emphasize simplicity, generality, and usability over the Internet. It derives from SGML (Standard Generalized Markup Language – ISO 8879).

By the mid-1990s some practitioners of SGML had gained experience with the then-new World Wide Web and believed that SGML offered solutions sufficient for the Web's needs. Nevertheless, as the Web grew, new problems appeared that it had to face. So an XML Working Group of eleven members, supported by an approximately 150-member Interest Group, was established. Technical debates took place on the Interest Group mailing list, and issues were resolved by consensus or, when that failed, by a majority vote of the Working Group.

The members of the XML Working Group never met face-to-face; the design was accomplished using a combination of emails and weekly teleconferences. The major design decisions were reached in twenty weeks of intense work between July and November 1996, when the first Working Draft of an XML specification was published. Further design work continued through 1997, and XML 1.0 became a W3C Recommendation on February 10, 1998.

Sources

Most of XML comes from SGML unchanged. For example, the separation of logical and physical structures (elements and entities), the availability of grammar-based validation (DTDs – Document Type Definitions), the separation of data and metadata (elements and attributes), mixed content, the separation of processing from representation (processing instructions), and the default angle-bracket syntax all come from SGML. XML has a fixed delimiter set and adopts Unicode as the document character set.
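To illustrate a few of those inherited concepts – elements, attributes as metadata, mixed content, and a processing instruction – here is a minimal sketch that parses a small hand-written document with Python's standard xml.etree.ElementTree module. The document itself is invented for the example.

```python
# A tiny invented XML document showing elements, attributes, mixed content
# and a processing instruction, parsed with the Python standard library.
import xml.etree.ElementTree as ET

doc = """<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="glossary.css"?>
<glossary lang="en">
  <term id="xml">Extensible <em>Markup</em> Language</term>
  <term id="sgml">Standard Generalized Markup Language</term>
</glossary>"""

root = ET.fromstring(doc)    # the stylesheet processing instruction is not kept in the tree
print("root element:", root.tag, "with attributes", root.attrib)
for term in root.findall("term"):
    # itertext() gathers the mixed content: plain text plus the text inside <em>
    text = "".join(term.itertext())
    print(f'  term id="{term.get("id")}": {text}')
```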

Other sources of technology for XML were the Text Encoding Initiative (TEI), which defined a profile of SGML for use as a 'transfer syntax', and HTML, in which elements were synchronous with their resource, the document character set was separated from the resource encoding, and – following the HTTP notion – metadata accompanied the resource rather than being needed at the declaration of a link. The Extended Reference Concrete Syntax (ERCS) project of SPREAD (Standardization Project Regarding East Asian Documents) came later.

Versions

There are two current versions of XML. The first (XML 1.0) was initially defined in 1998. It has undergone minor revisions since then, without being given a new version number, and is currently in its fifth edition, published on November 26, 2008. This version is widely implemented and still recommended for general use.

The second (XML 1.1) was initially published on February 4, 2004, the same day as XML 1.0 Third Edition, and is currently in its second edition, as published on August 16, 2006. This version contains features (some contentious) that are intended to make XML easier to use in certain cases. The main changes are to enable the use of line-ending characters used on EBCDIC platforms, and the use of scripts and characters absent from Unicode 3.2. XML 1.1 is not very widely implemented and is recommended for use only by those who need its unique features.

Prior to its fifth edition release, XML 1.0 differed from XML 1.1 in having stricter requirements for characters available for use in element and attribute names and unique identifiers: in the first four editions of XML 1.0 the characters were exclusively enumerated using a specific version of the Unicode standard (Unicode 2.0 to Unicode 3.2.) The fifth edition substitutes the mechanism of XML 1.1, which is more future-proof but reduces redundancy. The approach taken in the fifth edition of XML 1.0 and in all editions of XML 1.1 is that only certain characters are forbidden in names, and everything else is allowed, in order to accommodate the use of suitable name characters in future versions of Unicode. In the fifth edition, XML names may contain characters in the Balinese, Cham, or Phoenician scripts among many others which have been added to Unicode since Unicode 3.2.

Almost any Unicode code point can be used in the character data and attribute values of an XML 1.0 or XML 1.1 document, even if the character corresponding to the code point is not defined in the current version of Unicode. In character data and attribute values, XML 1.1 allows the use of more control characters than XML 1.0. But for “robustness” most of the control characters introduced in XML 1.1 must be expressed as numeric character references. Among the supported control characters in XML 1.1 are two line break codes that must be treated as whitespace. Whitespace characters are the only control codes that can be written directly.

There has been discussion of an XML 2.0, although no organization has announced plans for work on such a project. XML-SW, written by one of the original developers of XML, contains some proposals for what an XML 2.0 might look like: elimination of DTDs from the syntax, and integration of namespaces, XML Base and the XML Information Set into the base standard.



Comparative costs of electricity from different sources

April 26, 2010
3 Comments
                                                                            Composed by Galina Vitkova 

Renewable sources have been gaining more and more sympathy from ordinary people and governments alike. Their shares have been growing, especially in Europe and North America, as you can see from Fig. 1 below.

Fig. 1 – Shares of renewables in 2005 and 2020

But above all, renewables are very expensive. The relative costs of generating electricity from different sources, shown in the next graph, provide the evidence.

The costs are calculated taking into consideration several internal cost factors. These factors are as follows:

  • Capital costs (including waste disposal and decommissioning costs, especially for nuclear power plants – NPPs) – tend to be low for fossil fuel power stations; high for renewables and nuclear power plants; very high for waste to energy, wave and tidal, photovoltaic (PV) and solar thermal power installations.
  • Operating and maintenance costs – tend to be high for nuclear, coal, and waste-to-energy power stations (fly and bottom ash disposal, emissions clean up, operating steam generators) and low for renewables and oil and gas fired peaking units.
  • Fuel costs – high for fossil fuel and biomass sources, very low for nuclear and renewables, possibly negative for waste to energy power plants.
  • Expected annual hours run – as low as 3% for diesel peakers, 30% for wind, and up to 90% for nuclear power stations.

The comparative costs of electricity produced from different sources of primary energy, calculated using the above-mentioned factors, are depicted in Fig. 2.

Fig. 2 – Comparative costs of electricity
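As a simplified, assumption-laden illustration of how such factors combine into a cost per kWh, the sketch below computes a crude levelised cost for two hypothetical plants. Every number in it is invented for the example and is not taken from the figures above.

```python
# Crude levelised-cost-of-electricity (LCOE) sketch; every figure is hypothetical.
# LCOE = (annualised capital + fixed O&M + fuel) / annual energy produced, per kW installed.
def lcoe(capital_per_kw, fixed_om_per_kw, fuel_per_kwh, capacity_factor,
         lifetime_years=30, discount_rate=0.08):
    # The capital recovery factor spreads the capital cost over the plant's lifetime.
    crf = discount_rate * (1 + discount_rate) ** lifetime_years / \
          ((1 + discount_rate) ** lifetime_years - 1)
    annual_energy_kwh = 8760 * capacity_factor
    annual_cost = capital_per_kw * crf + fixed_om_per_kw + fuel_per_kwh * annual_energy_kwh
    return annual_cost / annual_energy_kwh

# Two invented plants: a fuel-burning unit with low capital but high fuel cost,
# and a wind unit with high capital, zero fuel cost and low expected annual hours run.
print(f"gas-like plant : {lcoe(900, 25, 0.045, 0.85):.3f} $/kWh")
print(f"wind-like plant: {lcoe(2200, 40, 0.0, 0.30):.3f} $/kWh")
```

The point of the sketch is only to show the mechanics: high capital costs hurt most when the expected annual hours run are low, which is exactly the combination the factor list above attributes to most renewables.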

US generating costs in May 2008, given in Fig. 3, also support this fact.

Fig. 3 – US Generating costs in 2008

Nonetheless, in the long term the costs should even out, according to various forecasts – see, for example, Fig. 4.

 

Fig. 4 – Long term cost trends

Besides the high costs, the other serious problem connected with renewables concerns their intermittency. It means that a wind power installation generates electricity when the wind blows, and similarly a solar plant produces electricity when the Sun shines. But consumers require and consume electricity when they need it, e.g. in the mornings and evenings, i.e. mostly at times quite different from when the wind blows or the Sun shines. So means for energy balancing, which are limited, are needed in order to meet consumers' demands (see a detailed analysis of the issue in Renewable energy – our downfall? by Ralph Ellis and in the posts If we don´t interest in the energy future, we may see its collapse and Is the „green“ energy really free?). Nowadays some gas power plants or hydro power plants are used to balance the variation. With more intermittent renewables in the electricity grid they will have to do this much more often, and the situation could become intricate, maybe unsolvable.

The problem is not a lack of wind or solar (etc.) energy; it is the fact that at times there may be too much wind or sun. Various operational and economic conflicts will arise, especially at times of low electricity demand. Energy storage (e.g. pumped hydro) and export through new interconnections could help (for a teach-in on how serious the situation is, see Renewable energy – our downfall?).


