Composed by Galina Vitkova
The Renewables-Grid-Initiative (RGI) promotes effective integration of 100% electricity produced from renewable energy sources.
I do not believe this RGI statement. I am sure that it is impossible from a technical and technological point of view. Simply recall the very low share of renewables in total world electricity production (3% excluding hydroelectricity), the very high investment costs, and the very high prices of electricity currently produced from renewables.
Concerns about climate and energy security (especially in the case of nuclear power plants) are reasons supporting the efforts for a quick transformation towards a largely renewable power sector. The European emissions reduction targets to keep the temperature increase below 2°C require the power sector to be fully decarbonised by 2050. Large parts of society demand that this decarbonisation be achieved predominantly with renewable energy sources.
Renewables advocates do not say much about real solutions to the genuinely complex problems of renewable sources. Very often they are not even aware of them. Even though renewable energy technologies are now established and appreciated by officials and green activists as a key means of producing electricity in a climate- and environment-friendly way, many crucial problems remain unsolved. Additional power lines, which are needed for transporting electricity from new renewable generation sites to users, have a negative impact on the environment, including biodiversity, ecosystems and the landscape. Furthermore, electricity surpluses, produced by renewables when electricity consumption is very low, cause enormous problems with storage. Besides, there are serious problems with dispatch control of a power system with great penetration of renewables (see Variability and intermittency of wind energy in Number 31 – Giving a definition / July 2011). On the whole, the three most important problems are waiting to be solved, and each of them demands massive investment:
Thus, concerns about the impacts of renewables integration in European power systems need to be carefully studied, fully understood and addressed.
Let us look closely at the issues of building new transmission lines. In the coming decade thousands of kilometres of new lines should be built across Europe. Renewable energy sources are abundant and varied, but they are mostly available in remote areas where demand is low and economic activity infrequent. Therefore, thorough strategic planning is required to realise a new grid infrastructure that meets the electricity needs of the next 50-70 years. The new grid architecture is supposed to enable the integration of all renewable energy sources – independently of where and when they are generated – and to expand the possibilities for distributed generation and demand-side management.
Grid expansion is inevitable but often controversial. The transmission system operators (TSOs) need to accommodate not only the 2020 targets but also to prepare for the more challenging full decarbonisation of the power sector by 2050. The non-governmental organisation (NGO) community is still not united on supporting or opposing the grid expansion. A number of technical, environmental and health questions need to be addressed and clarified to improve shared understanding among and across TSOs and NGOs. RGI is trying to bring TSOs and NGOs together in cooperation.
The grid expansion can be accomplished by means of overhead lines and underground cables. Both can transmit alternating current (AC) or direct current (DC). In the past it was relatively easy to choose between lines and cables:
Cables were mainly used in the grid for shorter distances, mostly because they are more expensive and have a shorter technical lifetime (about 50% of that of overhead lines), whereas overhead lines were used in other cases. Nowadays the situation is more complex, since more options and more parameters have to be considered. In the future cables will probably be used even more, as development moves towards higher power levels.
Cables enjoy higher public acceptance because of their lower disturbance of natural scenery, lower electromagnetic radiation, lesser disturbance of wildlife, and higher weather tolerance. Overhead lines, unfortunately, disturb the scenery and seriously affect wildlife and protected areas.
The grid development for expanding renewables by means of overhead lines endangers bird populations in Europe. High, large-scale bird mortality from overhead power lines continues due to:
All of this constitutes a significant threat to birds and other wildlife. For these reasons, standards to protect birds (under the Habitats and Birds Directives) are being worked out.
Moreover, the European Commission is currently working on new legislation to ensure that the energy infrastructure needed for implementing the EU climate and energy targets will be built in time.
Composed by Galina Vitkova
Fusion power is power generated by nuclear fusion processes. In fusion reactions two light atomic nuclei fuse together to form a heavier nucleus. During the process a comparatively large amount of energy is released.
The term “fusion power” is commonly used to refer to potential commercial production of usable power from a fusion source, comparable to the usage of the term “steam power”. Heat from the fusion reactions is utilized to operate a steam turbine which in turn drives electrical generators, similar to the process used in fossil fuel and nuclear fission power stations.
Fusion power has significant safety advantages in comparison with current power stations based on nuclear fission. Fusion only takes place under very limited and controlled conditions, so a failure of precise control or an interruption of fueling quickly shuts down the fusion reactions. There is no possibility of runaway heat build-up or large-scale release of radioactivity, and little or no atmospheric pollution. Furthermore, the power source comprises light elements in small quantities, which are easily obtained and largely harmless to life, and the waste products are short-lived in terms of radioactivity. Finally, there is little overlap with nuclear weapons technology.
Fusion-powered electricity generation was initially believed to be readily achievable, as fission power had been. However, the extreme requirements for continuous reactions and plasma containment led to projections being extended by several decades. More than 60 years after the first attempts, commercial fusion power production is still believed to be unlikely before 2040.
The tokamak (see also Number 29 – Easy such and so / April 2011, Nuclear power – tokamaks), which uses magnetic confinement of a plasma, dominates modern research. Very large projects like ITER (see also The Project ITER – past and present) are expected to pass several important milestones toward commercial power production, including a burning plasma with long burn times, high power output, and online fueling. There are no guarantees that the project will be successful. Unfortunately, previous generations of tokamak machines have repeatedly revealed new problems. But the entire field of high-temperature plasmas is much better understood now than formerly, so ITER is optimistically expected to meet its goals. If successful, ITER would be followed by a “commercial demonstrator” system, similar in purpose to the very earliest power-producing fission reactors built before wide-scale commercial deployment of larger machines started in the 1960s and 1970s.
Stellarators, which also use magnetic confinement of a plasma, are the earliest controlled fusion devices. The stellarator was invented by Lyman Spitzer in 1950 and built the next year at what later became the Princeton Plasma Physics Laboratory. The name “stellarator” originates from the possibility of harnessing the power source of the sun, a stellar object.
Stellarators were popular in the 1950s and 60s, but the much better results from tokamak designs led to their falling from favor in the 1970s. More recently, in the 1990s, problems with the tokamak concept led to renewed interest in the stellarator design, and a number of new devices have been built. Some important modern stellarator experiments are Wendelstein, in Germany, and the Large Helical Device, in Japan.
Inertial confinement fusion (ICF) is a process where nuclear fusion reactions are initiated by heating and compressing a fuel target, typically in the form of a pellet. The pellets most often contain a mixture of deuterium and tritium.
To compress and heat the fuel, energy is delivered to the outer layer of the target using high-energy beams of laser light, electrons or ions, although for a variety of reasons, almost all ICF devices to date have used lasers. The aim of ICF is to produce a state known as “ignition”, where this heating process causes a chain reaction that burns a significant portion of the fuel. Typical fuel pellets are about the size of a pinhead and contain around 10 milligrams of fuel. In practice, only a small proportion of this fuel will undergo fusion, but if all this fuel were consumed it would release the energy equivalent to burning a barrel of oil.
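The barrel-of-oil comparison can be sanity-checked with a back-of-envelope calculation. The sketch below assumes complete burn of a pure deuterium-tritium pellet and about 17.6 MeV released per D-T reaction; the result lands within a factor of two of the energy in a barrel of oil (roughly 6.1 GJ), consistent with the claim:

```python
AVOGADRO = 6.022e23              # particles per mole
MEV_TO_J = 1.602e-13             # joules per MeV
ENERGY_PER_REACTION_MEV = 17.6   # released by one D-T fusion reaction

fuel_mass_g = 0.010              # a ~10 mg pellet
pair_molar_mass = 5.0            # one deuteron (2 u) + one triton (3 u), g/mol

# Each reaction consumes one D-T pair, so count the pairs in the pellet
pairs = fuel_mass_g / pair_molar_mass * AVOGADRO
energy_j = pairs * ENERGY_PER_REACTION_MEV * MEV_TO_J   # ~3.4e9 J
```

In practice only a small fraction of the fuel burns, so this is an upper bound, not a prediction of reactor output.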
To date most of the work in ICF has been carried out in France and the United States, and generally it has seen less development effort than magnetic approaches. Two large projects are currently underway: the Laser Mégajoule in France and the National Ignition Facility in the United States.
Reference: Wikipedia, the free encyclopedia http://en.wikipedia
Composed by Galina Vitkova
Tactical media is a form of media activism that uses media and communication technologies for social movement and privileges temporary, hit-and-run interventions in the media sphere. Attempts to spread information not available in mainstream news are also called media activism. The term was first introduced in the mid-1990s in Europe and the United States by media theorists and practitioners. Since then, it has been used to describe the practices of a vast array of art and activist groups. Tactical media also shares something with the hacker subculture, and in particular with software and hardware hacks which modify, extend or unlock closed information systems and technologies.
Video games have opened a completely new avenue for tactical media artists. This form of media allows a wide range of audiences to be informed about a specific issue or idea. Some examples of games that touch on tactical media are Darfur is Dying and September 12. One example of a game design studio that works in tactical media is TAKE ACTION games (TAG). The video game website www.newsgaming.com embodies the idea of tactical media in video games. Newsgaming coined this name for a new genre that raises awareness of current news issues based on real-world events, as opposed to the fantasy worlds that other video games are set in. It contributes to an emerging culture largely aimed at raising awareness of important matters in a new and compelling way.
Other examples of tactical media within video games include The McDonald’s Game. The author of this game takes information from the executive officers of McDonald’s and gives it to the public, informing people about how McDonald’s does its business and what means it uses to accomplish it.
Chris Crawford’s Balance of the Planet, made in 1990, is another example of tactical media, in which the game describes environmental issues.
The game begins with the player choosing a member of a Darfuri family that has been displaced by the conflict. The first of the game’s two modes has the player controlling the family member, who travels from the camp to a well and back while dodging patrols of the janjaweed militia. If captured, the player is informed of what has happened to his or her selected character and asked to select another member of the family and try again. If the water is successfully carried back to the camp, the game switches into its second mode – a top-down management view of the camp, where the player must use the water for crops and to build huts. When the water runs out, the player must return to the water-fetching level to progress. The goal is to keep the camp running for seven days.
Reception of the game
The game has been reported by mainstream media sources such as The Washington Post, Time Magazine, BBC News and National Public Radio. In an early September 2006 interview, Ruiz stated that it was difficult to determine success for a game with a social goal, but affirmed that more than 800,000 people had played it 1.7 million times since its release. Moreover, tens of thousands of them had forwarded the game to friends or sent a letter to an elected representative. As of April 2007, the game has been played more than 2.4 million times by over 1.2 million people worldwide.
The game has been the focus of debate on its nature and impact. Some academics interviewed by the BBC about the game stated that anything that might spark debate over Darfur and the issues surrounding it is a clear gain for the advocates. Others thought that the game oversimplified a complex situation and thus failed to address the actual issues of the conflict. The game was also criticized for the sponsorship of mtvU, raising the possibility that it might seem like a marketing tool for the corporation. The official site does not use the word “game”, but refers to Darfur is Dying as a “narrative based simulation.”
There are many servers on the Internet that offer games to play online. Playing is very easy, and even users with only basic knowledge of computers and the Internet can play these games. The most common way to start playing is to open a browser and visit the Google page. Then type two words into the search box: online games, and Google immediately offers you many servers, e.g. www.onlinegames.net, www.freeonlinegames.com or the Czech pages www.super-games.cz, etc. Each server offers many games of different sorts. There you may find games for boys, girls and kids, most-played games, new games, and others. Or you can select games by subject, i.e. adventure games, sports games, war games, erotic or strategy games, etc.
Many games have their own manual on how to play, so the second step is to study the manual. Depending on the subject of a game, the user must use, for example, the Right Arrow key to go forward, the Left Arrow to go back, PgUp to go up, and Ctrl to shoot. It is very easy to understand how to play and to recognize the goal of the game, e.g. to score the maximum points, to kill everything that moves, or to finish first. These games are rather simple, but some people become too addicted to them, trying to improve their best performance. Sometimes they spend hours in front of the screen every day and lose all track of time.
I have tried four different servers and about six different games. In my opinion these games are very easy and, for me, boring, but for younger users or for people who are bored right now they can be interesting. However, the most important thing (in my view) is that two of the tested servers were infected (my computer warned me that the pages are dangerous and may contain malware, spyware or viruses). Friends who have had problems with their computers in this sense ask me to repair them – maybe that is the reason why I don’t like playing games online directly on the Internet.
On the other hand, I have also tried the game Quake 3 (a game demo – not through the Internet, but after installing the game on my computer), and I can affirm that it was pretty interesting.
Quake 3 Arena is a pure shooting game. There is no goal other than to kill all other players (though in other modes such as Team Deathmatch or Capture the Flag, two teams fight against each other). The player can choose the difficulty level (from easy to hard) and various arenas. Quake 3 Arena is the mode where the player fights in the arena against computer-controlled bots (artificial intelligence fighters).
The fighters do battle equipped with various weapons as follows:
It is important for the players to find and acquire armor – the maximum is 200 armor points. The armor provides protection by absorbing 2/3 of incoming damage. Similarly, the players must watch their health (at the beginning they have 125 points, which counts as 100%, and can reach a maximum of 200 points).
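As a rough sketch (not Quake's actual engine code), the armor rule described above can be modelled in a few lines: armor soaks up two thirds of each hit, limited by the armor points remaining, and health takes the rest:

```python
def apply_damage(health, armor, damage):
    """Armor absorbs 2/3 of incoming damage, up to the armor points left;
    health takes the remainder. Returns the new (health, armor) pair."""
    absorbed = min(armor, (damage * 2) // 3)
    return health - (damage - absorbed), armor - absorbed

# A fighter at the starting 125 health with full 200 armor takes a 90-point hit:
print(apply_damage(125, 200, 90))   # → (95, 140): armor soaked 60, health lost 30
```

With little armor left, most of the damage reaches health instead, which is why hunting for armor pickups matters.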
Sometimes (depending on the game) additional features are available – a Battle Suit, Haste (makes movement and shooting twice as fast for 30 seconds), Invisibility (for 30 seconds), a Medkit, a Teleporter (the player is moved to a random place), Regeneration, Flight (for 60 seconds), and so on.
The term game platform refers to the particular combination of electronic or computer hardware which, together with low-level software, allows a video game to run. In general, a hardware platform means a group of compatible computers that can run the same software. A software platform comprises a major piece of software, such as an operating system, an operating environment, or a database, under which various smaller application programs can be designed to run. The main video game platforms are reviewed below.
PC games often require specialized hardware in the user’s computer, such as a specific generation of graphics processing unit or an Internet connection for online play, although these system requirements vary from game to game. In any case, your PC hardware capabilities should meet the minimum hardware requirements established for particular PC games. On the other hand, many modern computer games allow, or even require, the player to use a keyboard and mouse simultaneously without demanding any additional devices.
As of the 2000s, PC games are often regarded as offering a deeper and more complex experience than console games.
Usually, a home console system is connected to a common television set or composite video monitor. A composite monitor is any analog video display that receives input in the form of an analog composite video signal through a single cable. The monitor differs from a conventional TV set in that it does not have an internal RF (radio frequency) tuner or RF converter. However, a user can install an external device that emulates a TV tuner.
A handheld game console is a lightweight, portable electronic device of small size with a built-in screen, game controls and speakers. Its small size allows people to carry handheld game consoles and play games at any time and place.
In the past decade, handheld video games have become a major sector of the video game market. For example, in 2004 sales of portable software titles exceeded $1 billion in the United States.
Handheld electronic games are very small portable devices for playing interactive electronic games, often miniaturized versions of video games. The controls, display and speakers are all part of a single unit. They usually have displays designed to play one game. Due to this simplicity they can be made as small as a digital watch, and sometimes are. They usually do not have interchangeable cartridges or disks and are not reprogrammable. The visual output of these games can range from a few small light bulbs or light-emitting diode (LED) lights to calculator-like alphanumeric screens. Nowadays these outputs have mostly been displaced by liquid crystal and vacuum fluorescent display screens. Handhelds were most popular from the late 1970s into the early 1990s. They are both the precursors of and inexpensive alternatives to the handheld game console.
The first game that was pre-installed onto a mobile phone was Snake on selected Nokia models in 1997. Snake and its variants have since become the most-played video game on the planet, with over a billion people having played the game. Mobile games are played using the technologies present on the device itself. The games may be installed over the air, they may be side loaded onto the handset with a cable, or they may be embedded on the handheld devices by the original equipment manufacturer (OEM) or by the mobile operator.
For networked games, there are various technologies in common use, for example, text message (SMS), multimedia message (MMS) or GPRS location identification.
An arcade game is a coin-operated entertainment machine, usually installed in public businesses such as restaurants, public houses, and video arcades. Most arcade games are redemption games, merchandisers (such as claw cranes), video games, or pinball machines. The golden age of arcade video games in the early 1980s was the peak era of arcade game popularity, innovation, and earnings.
Furthermore, by the late 1990s and early 2000s, networked gaming via consoles and computers across the Internet had appeared and replaced arcade games. The arcades also lost their position at the forefront of new game releases. Given the choice between playing a game at an arcade three or four times (perhaps 15 minutes of play for a typical arcade game) and renting, at about the same price, the exact same game for a video game console, people chose the console. To remain viable, arcades added other elements to complement the video games, such as redemption games, merchandisers, and games that use special controllers largely inaccessible to home users. Besides, they equipped games with reproductions of automobile or airplane cockpits, motorcycle- or horse-shaped controllers, or highly dedicated controllers such as dancing mats and fishing rods. Moreover, today arcades have extended their activities to food service, etc., striving to become “fun centers” or “family fun centers”.
All modern arcade games use solid-state electronics and integrated circuits. In the past, coin-operated arcade video games generally used custom per-game hardware, often with multiple CPUs, highly specialized sound and graphics chips, and the latest in computer graphics display technology. Recent arcade game hardware is often based on modified video game console hardware or high-end PC components.
Dear friends of Technical English!
Having finished discussing new features of Windows 7, we were looking for topics that would be suitable for studying Technical English. After a relatively short time we chose computer games for the following reasons:
Enjoy the text and participate in discussion!
Personal computer games (also known as computer games or PC games) have evolved from the simple graphics and gameplay of early titles like Spacewar to a wide range of more visually advanced titles.
Although personal computers only became popular with the development of microprocessors, mainframes and minicomputers, computer gaming has existed since at least the 1960s. The first generation of PC games were often text adventures or interactive fiction, in which the player communicated with the computer by entering commands through a keyboard. The increasing adoption of the computer mouse and high-resolution bitmap displays allowed increasingly high-quality graphical interfaces to be included in new releases. Further improvements to games were made with the introduction of the first sound cards in 1987. These cards allowed IBM PC compatible computers to produce complex sounds using frequency modulation (FM synthesis). Previously those computers had been limited to simple tones and beeps.
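The FM synthesis those sound cards performed can be illustrated in a few lines of code: a carrier sine wave whose phase is modulated by a second sine wave. The function below is only an illustration of the principle (the parameter names are invented, not a sound-card API):

```python
import math

def fm_tone(fc=440.0, fm=110.0, beta=2.0, seconds=0.5, rate=8000):
    """Sample y(t) = sin(2*pi*fc*t + beta*sin(2*pi*fm*t)) at `rate` Hz.
    fc is the carrier frequency, fm the modulator, beta the modulation index."""
    n = int(seconds * rate)
    return [math.sin(2 * math.pi * fc * t / rate
                     + beta * math.sin(2 * math.pi * fm * t / rate))
            for t in range(n)]

samples = fm_tone()   # 4000 floats in [-1.0, 1.0], ready to scale and write to a WAV file
```

Varying the modulation index beta changes the richness of the overtones, which is how a single pair of oscillators could imitate many different instrument timbres.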
By 1996, the rise of Microsoft Windows and the success of 3D console titles had created great interest in hardware-accelerated 3D graphics on IBM PC compatible computers, which soon resulted in attempts to produce affordable solutions. Tomb Raider, released in 1996, was one of the first 3D action games acclaimed for its revolutionary graphics. However, major changes to the Microsoft Windows operating system made many older MS-DOS-based games unplayable on Windows NT, and later Windows XP, without using an emulator. Faster graphics accelerators and improving CPU technology resulted in increasing levels of realism in computer games. During this time, these improvements have allowed developers to increase the complexity of modern game engines. PC gaming currently tends strongly toward improvements in 3D graphics.
Concurrently, many game publishers began to experiment with new forms of marketing. Nowadays episodic gaming is chief among these alternative strategies. This kind of gaming is an adaptation of the older concept of expansion packs, in which game content is provided in smaller quantities but for a proportionally lower price. Titles such as Half-Life 2: Episode One took advantage of the idea, with mixed results arising from concerns about the amount of content provided for the price.
The multi-purpose nature of personal computers often allows users to modify the content of installed games with relative ease compared with console games. Console games are generally difficult to modify without proprietary means. Furthermore, they are often protected by legal and physical barriers against tampering. By contrast, the personal computer versions of games may be modified using common, easy-to-obtain software. Users can then distribute their customised version of the game (commonly known as a mod) by any means they choose.
The inclusion of map editors, such as UnrealEd, with the retail versions of many games that have been made available online allows users to create modifications for games smoothly. Moreover, users may use for this purpose tools that are maintained by the games’ original developers. In addition, companies such as id Software have released the source code of older game engines, enabling the creation of entirely new games and major changes to existing ones.
Modding has allowed much of the community to produce game elements that would not normally be provided by the developer of the game. This has enabled normal gameplay to be expanded or modified to varying degrees.
A week ago I bought my first e-book reader – a Hanlin eReader V5. It cost 5,500 Czech crowns (about US$275) including VAT. The set includes a leather case, the e-reader, a USB cable, a power adapter, a manual, a screwdriver, headphones and a 2 GB SD card with 250 e-books in Czech (by the Czech authors J. A. Komensky and K. Capek, then J. London, etc.).
Here is my general impression after 4 days of owning this device.
Advantages appear to be as follows:
Disadvantages seem to be as follows:
NOTE: E-Ink is used in electronic paper (e-paper), or electronic ink displays, which imitate the appearance of ordinary ink on paper. E-paper reflects ambient light like ordinary paper rather than emitting its own light. An e-paper display can be read in direct sunlight without the image appearing to fade. Moreover, in that case the contrast is at its best. In poorly lit places, on the contrary, a problem with contrast may appear.
Thanks to this technology the display draws energy from the battery only when it redraws its content, which takes just a fraction of a second. For the rest of the display time no power is needed. This feature lets the e-reader offer hundreds of hours of reading without charging; e.g. with a standard Li-pol battery that amounts to about 8,000 pages of text.
The e-reader can show videos and photos, but the playback quality is not good.
Composed by G. Vitkova
The following text about XML is intended to review basic knowledge of XML as an integrating tool over the Internet. Our aim is to prepare a platform for a discussion of further improvements in user comfort introduced and implemented in recent versions of Windows based on XML. Enjoy the text and discuss. Galina Vitkova
XML (Extensible Markup Language) is a set of rules for encoding documents electronically. The XML design goals emphasize simplicity, generality, and usability over the Internet. It derives from SGML (Standard Generalized Markup Language – ISO 8879).
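A minimal well-formed document illustrates the angle-bracket syntax, element nesting and attributes discussed below (the element names here are invented for the example); Python's standard library can parse it:

```python
import xml.etree.ElementTree as ET

doc = """<?xml version="1.0" encoding="UTF-8"?>
<note date="2011-07-01">
  <to>Reader</to>
  <body>Elements nest; attributes carry metadata.</body>
</note>"""

root = ET.fromstring(doc)
# Elements form a tree; attributes are name-value pairs on an element
print(root.tag, root.attrib["date"], root.find("to").text)
# → note 2011-07-01 Reader
```

Every opening tag must have a matching closing tag and elements must nest properly; a parser rejects anything else, which is what distinguishes well-formed XML from loosely written HTML.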
By the mid-1990s some practitioners of SGML had gained experience with the then-new World Wide Web and believed that SGML offered solutions sufficient for the Web. Nevertheless, as the Web grew, new problems appeared that it had to face. So an XML working group of eleven members, supported by an approximately 150-member Interest Group, was established. Technical debates took place on the Interest Group mailing list, and issues were resolved by consensus or, when that failed, by majority vote of the Working Group.
The members of the XML Working Group never met face-to-face; the design was accomplished using a combination of emails and weekly teleconferences. The major design decisions were reached in twenty weeks of intense work between July and November 1996, when the first Working Draft of an XML specification was published. Further design work continued through 1997, and XML 1.0 became a W3C Recommendation on February 10, 1998.
Most of XML comes from SGML unchanged. For example, the separation of logical and physical structures (elements and entities), the availability of grammar-based validation (DTDs – Document Type Definitions), the separation of data and metadata (elements and attributes), mixed content, the separation of processing from representation (processing instructions), and the default angle-bracket syntax all come from SGML. XML has a fixed delimiter set and adopts Unicode as the document character set.
Other sources of technology for XML were the Text Encoding Initiative (TEI), which defined a profile of SGML for use as a ‘transfer syntax’; HTML, in which elements were synchronous with their resource, the separation of document character set from resource encoding, and the HTTP notion that metadata accompanied the resource rather than being needed at the declaration of a link. The Extended Reference Concrete Syntax (ERCS) project of the SPREAD (Standardization Project Regarding East Asian Documents) followed later.
There are two current versions of XML. The first (XML 1.0) was initially defined in 1998. It has undergone minor revisions since then, without being given a new version number. Currently it is in its fifth edition, which was published on November 26, 2008. The version is widely implemented and still recommended for general use.
The second (XML 1.1) was initially published on February 4, 2004, the same day as XML 1.0 Third Edition, and is currently in its second edition, as published on August 16, 2006. This version contains features (some contentious) that are intended to make XML easier to use in certain cases. The main changes are to enable the use of line-ending characters used on EBCDIC platforms, and the use of scripts and characters absent from Unicode 3.2. XML 1.1 is not very widely implemented and is recommended for use only by those who need its unique features.
Prior to its fifth edition release, XML 1.0 differed from XML 1.1 in having stricter requirements for characters available for use in element and attribute names and unique identifiers: in the first four editions of XML 1.0 the characters were exclusively enumerated using a specific version of the Unicode standard (Unicode 2.0 to Unicode 3.2.) The fifth edition substitutes the mechanism of XML 1.1, which is more future-proof but reduces redundancy. The approach taken in the fifth edition of XML 1.0 and in all editions of XML 1.1 is that only certain characters are forbidden in names, and everything else is allowed, in order to accommodate the use of suitable name characters in future versions of Unicode. In the fifth edition, XML names may contain characters in the Balinese, Cham, or Phoenician scripts among many others which have been added to Unicode since Unicode 3.2.
Almost any Unicode code point can be used in the character data and attribute values of an XML 1.0 or XML 1.1 document, even if the character corresponding to the code point is not defined in the current version of Unicode. In character data and attribute values, XML 1.1 allows the use of more control characters than XML 1.0. But for “robustness” most of the control characters introduced in XML 1.1 must be expressed as numeric character references. Among the supported control characters in XML 1.1 are two line break codes that must be treated as whitespace. Whitespace characters are the only control codes that can be written directly.
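The difference is easy to demonstrate with an XML 1.0 parser such as Python's expat-based ElementTree: a line-feed numeric reference is accepted as whitespace, while a reference to the control character U+0001 (which XML 1.1 would allow, but only as a numeric reference) is rejected:

```python
import xml.etree.ElementTree as ET

# &#10; (line feed) is whitespace and legal in XML 1.0 character data
assert ET.fromstring("<a>line1&#10;line2</a>").text == "line1\nline2"

# U+0001 is forbidden in XML 1.0 even as a numeric character reference;
# an XML 1.1 parser would accept &#1;, but expat implements XML 1.0
try:
    ET.fromstring("<a>&#1;</a>")
    rejected = False
except ET.ParseError:
    rejected = True
print(rejected)   # → True
```

This is one practical reason XML 1.1 never displaced XML 1.0: documents relying on its extra characters fail in the far more widely deployed 1.0 parsers.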
There has been discussion of an XML 2.0, although no organization has announced plans for work on such a project. XML-SW, written by one of the original developers of XML, contains some proposals for what an XML 2.0 might look like: elimination of DTDs from the syntax, and integration of namespaces, XML Base and the XML Information Set into the base standard.
Renewable sources have been gaining more and more sympathy from ordinary people and from governments too. Their shares have been growing, especially in Europe and North America. You can verify this by looking at Fig. 1 below.
Fig. 1 – Shares of renewables in 2005 and 2020
But first of all, renewables are very expensive. The relative costs of generating electricity from different sources, shown in the next graph, support this with evidence.
The costs are calculated taking into consideration several internal cost factors. These factors are as follows:
Comparative costs of electricity produced from different sources of primary energy, calculated using the above-mentioned factors, are depicted in Fig. 2.
Fig. 2 – Comparative costs of electricity
US generating costs in May 2008, given in Fig. 3, also support this fact.
Fig. 3 – US Generating costs in 2008
Nonetheless, in the long term the costs should converge according to various forecasts – see, for example, Fig. 4.
Fig. 4 – Long term cost trends
Besides high costs, the other serious problem connected with renewables concerns their intermittency. A wind power installation generates electricity when the wind blows, and similarly a solar plant produces electricity when the sun shines. But consumers require and consume electricity when they need it, e.g. in the mornings and evenings, i.e. mostly at times quite different from when the wind blows or the sun shines. So means of energy balancing, which are limited, must be employed to meet consumers’ demands (see a detailed analysis of the issue in Renewable energy – our downfall? by Ralph Ellis and in the posts If we don´t interest in the energy future, we may see its collapse and Is the „green“ energy really free?). Nowadays some gas or hydro power plants are used to balance the variation. With more intermittent renewables in the electricity grid they will have to do this much more often, and the situation could become intricate, maybe unsolvable.
The problem is not a lack of wind or solar energy; it is the fact that at times there may be too much wind or sun. Various operational and economic conflicts will arise, especially at times of low electricity demand. Energy storage (e.g. pumped hydro) and export through new interconnections could help (for a teach-in on how serious the situation is, see Renewable energy – our downfall?).
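The balancing problem described above can be sketched with a toy residual-load calculation (the numbers are purely illustrative, not real grid data): whatever demand remains after subtracting wind output must be covered by dispatchable gas or hydro plants, and a negative residual means a surplus that must be stored, exported, or curtailed:

```python
demand = [30, 28, 35, 40, 38, 33]   # MW per hour, illustrative
wind   = [10, 25, 42,  5,  0, 20]   # MW per hour, illustrative

# Residual load: what dispatchable plants must supply each hour
residual = [d - w for d, w in zip(demand, wind)]
surplus_hours = [h for h, r in enumerate(residual) if r < 0]

print(residual)        # → [20, 3, -7, 35, 38, 13]
print(surplus_hours)   # → [2]  (hour 2 has 7 MW of surplus wind)
```

Note how the residual swings from -7 MW to 38 MW within a few hours; it is this swing, not the average output, that the balancing plants and storage have to follow.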