|Yes, It's Another PPS Mega Prime!|
Friday 10 February 2017 23:17
On 9 February 2017, 17:56:42 UTC, PrimeGrid’s PPS Mega Prime Search project found the Mega Prime 533*2^3362857+1. The prime is 1,012,324 digits long and will enter Chris Caldwell's The Largest Known Primes Database ranked 203rd overall. The discovery was made by Hans-Jürgen Bergelt (Hans-Jürgen Bergelt) of Germany using an Intel(R) Core(TM) i7-3930K CPU @ 3.20GHz with 8GB RAM, running Microsoft Windows 7 Ultimate Edition. This computer took about 1 hour 18 minutes to complete the primality test using LLR. Hans-Jürgen is a member of the SETI.Germany team. The prime was verified on 10 February 2017, 00:06:03 UTC by Steve King (steveking) of the United States using an Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz with 20GB RAM, running Microsoft Windows 10 Professional Edition. This computer took about 1 hour 20 minutes to complete the primality test using LLR. For more details, please see the official announcement.
|Another PPS Mega Prime!|
Friday 10 February 2017 17:41
On 9 February 2017, 08:57:25 UTC, PrimeGrid’s PPS Mega Prime Search project found the Mega Prime 619*2^3362814+1. The prime is 1,012,311 digits long and will enter Chris Caldwell's The Largest Known Primes Database ranked 203rd overall. The discovery was made by Daniel Frużyński (Daniel) of Poland using an Intel(R) Xeon(R) CPU E5-2670 @ 2.60GHz with 32GB RAM, running Linux. This computer took about 1 hour 49 minutes to complete the primality test using LLR. Daniel is a member of the BOINC@Poland team. The prime was verified on 9 February 2017, 09:18:53 UTC by Alen Kecic (Freezing) of Germany using an Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz with 16GB RAM, running Microsoft Windows 10 Professional Edition. This computer took about 56 minutes to complete the primality test using LLR. Alen is a member of the SETI.Germany team. For more details, please see the official announcement.
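Both announcements quote a primality test with LLR and a digit count for a number of the form k*2^n+1. As a rough illustration only (PrimeGrid's actual LLR software uses heavily optimised FFT arithmetic to test million-digit candidates in about an hour; nothing below is its real code), Proth's theorem gives a fast test for such numbers, and the quoted digit counts follow from logarithms:

```python
# Illustrative sketch of a Proth-style primality test in pure Python.
# Proth's theorem: for N = k*2^n + 1 with odd k < 2^n, N is prime iff
# a^((N-1)/2) ≡ -1 (mod N) for some base a.

from math import floor, log10

def proth_is_prime(k, n, bases=(3, 5, 7, 11, 13)):
    """Test N = k*2^n + 1 for primality via Proth's theorem."""
    assert k % 2 == 1 and k < 2**n, "Proth's theorem needs odd k < 2^n"
    N = k * 2**n + 1
    for a in bases:
        r = pow(a, (N - 1) // 2, N)  # fast modular exponentiation
        if r == N - 1:
            return True    # witness found: N is prime
        if r != 1:
            return False   # Euler's criterion fails: N is composite
    return False           # inconclusive with these few bases

def digits(k, n):
    """Decimal length of k*2^n + 1."""
    return floor(log10(k) + n * log10(2)) + 1

print(proth_is_prime(3, 2))      # 3*2^2+1 = 13, a Proth prime → True
print(proth_is_prime(533, 10))   # 533*2^10+1 = 545793, divisible by 3 → False
print(digits(533, 3362857))      # → 1012324
```

Running `digits(533, 3362857)` and `digits(619, 3362814)` reproduces the announced lengths of 1,012,324 and 1,012,311 digits.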
|Sourcefinder Beta - Attempt 2|
Source: theSkyNet POGS - the PS1 Optical Galaxy Survey
Thursday 9 February 2017 08:33
Earlier today we were having some issues with the hard disk size of the sourcefinder server. While I was attempting to increase the disk size, I accidentally terminated the Amazon instance, destroying the server.
After a large amount of facepalming and re-installing, the server is back up with a larger hard disk at the following address: http://126.96.36.199/duchamp/.
Unfortunately all of the forum posts have been deleted along with the server, and there may be a few latent setup issues that I haven't addressed quite yet.
Before accidentally destroying everything, I did copy some of the bug reports that people had already made into a Trello board. This board will store all of the current issues that I'm investigating, so you can all keep track of what I'm doing: https://trello.com/b/Y3XkcXMJ/sourcefinder-public. Once I've read a bug report on the forums, I'll update that board with the relevant details and get to work on fixing it.
I also plan on posting a weekly changelog of everything I've managed to fix so far.
I'm uploading a 30GB packet of work units to the server for everyone to work on. I'll get them sent the moment I can.
I'm truly sorry about all of this, and I hope everyone will still be happy enough to participate in sourcefinder.
|Better Late Than Never: 2017 Tour de Primes!|
Wednesday 8 February 2017 14:52
PrimeGrid's annual Tour de Primes is underway! Unlike other challenges, there's no trophy for participation. This challenge is all about finding prime numbers. As with last year, there will also be a red jersey, which will be awarded to whoever finds the largest prime number during the month of February. And, of course, we'll be awarding the green, yellow, and polka-dot jerseys as in previous years. For more information, please see Tour de Primes 2017. Good luck everyone!
|First Scientific Results of DENIS in the Biophysical Society 61st Annual Meeting|
Tuesday 7 February 2017 13:40
You ran the exploratory study that located the part of the model causing the problem. Thanks to your collaboration, we were able to find it and propose a solution.
We continue working to make our heart cell model as realistic as possible, and we hope to keep bringing you great news like this, in which your effort is rewarded by enriching science.
Thank you very much for standing by our side and helping us in this and in future work.
|CPDN in 2016 – a look back over the last year|
Friday 3 February 2017 15:52
With the Paris Agreement freshly on everyone’s mind and in the media, 2016 started off as a very exciting year for climate science. On a global scale it took slightly unexpected turns, but from a scientific point of view 2016 was a year to celebrate, in particular for CPDN.
With respect to our knowledge and understanding of the climate system and the interaction between weather and climate, we, the climate science community, made huge progress, in no small degree thanks to CPDN, the teams of academic researchers from partner groups around the world, and most importantly the volunteers. Without you, the volunteers, there would be no very large ensemble simulations of possible weather, and without those our ability to undertake research on rare and extreme weather events would be greatly limited.
Moving into 2017, we, the climateprediction.net and weather@home teams, would like to take this opportunity to provide a very brief summary of activities and successes over the past 12 months, to show what your continued engagement with us has led to AND of course to say a big fat thank you to all of the volunteer community, without whom none of this would be possible.
CPDN is unique in providing large ensembles that enable us to simulate the statistics of extremely rare events; hence the main focus of our work has been on extreme weather and, in particular, its attribution to external climate drivers.
And the media did register our efforts and reported on them broadly with, by and large, great scientific accuracy.
Apart from providing attribution information when it is needed most, the team did a lot of work on developing extreme event attribution methods using CPDN data, and published this in the peer-reviewed literature (1-9).
Particular highlights among these publications are the proof-of-concept paper on real-time attribution by Karsten Haustein et al.
and the first-ever end-to-end attribution study, reaching from the atmospheric circulation to inundated properties, by Schaller et al.
The team not only looked at specific events, however, but also published a number of conceptual papers on attribution as a science, on CPDN as a unique capability, and on climate modelling in general (10-15).
Many of these scientific publications are the result of collaborations with scientists around the world in the international teams of our research projects that make up the science teams of CPDN.
EUCLEIA, a European project that ended this year, not only explored many of the challenges and limitations of extreme event attribution but in particular fostered and strengthened a scientific community that will live on in other projects for the coming years.
With WWA, CPDN has been part of the first science team ever to provide real-time event attribution, and with its new 2016 spin-off RRA, WWA and the EUCLEIA legacy will generate a global community and enable scientists, in particular from developing countries, to become active members of this community. The groundwork to make this possible came not only from WWA but also from the NERC-funded CPDN project ACE-Africa, which ended in 2016. A main achievement of this project, beyond the scientific findings, is providing the necessary climate model simulations to explore the impacts of climate change under 1.5 and 2 degrees of warming.
New and unique model simulations have also been made available through the MaRIUS project under which CPDN created a very large ensemble of possible weather and extreme weather in Europe from the beginning of the 20th century up to the end of the 21st.
Another new project relying largely on CPDN’s modelling infrastructure, LOTUS, has a different regional focus: it is a collaboration with the University of Edinburgh, the UK Met Office and various Chinese universities.
Modelling and infrastructure
How have we actually produced all this science in the past year? Through the efforts of you, our volunteers, and the moderating team, for which we are particularly thankful, projects within the program have submitted 582k workunits, providing a total of 7557 CPU years of computational resource. Around 384k workunits were returned during the year, which amounts to 20 million simulated model years! We have also been considering how to maximise the value of the data generated during the project. To facilitate this we will be focusing on data curation, publishing and reuse, ensuring that the data you, the volunteers, have generated is available to all.
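As a quick back-of-the-envelope check of the figures above (the totals are from this post; the per-workunit averages are our own arithmetic, not project-published numbers):

```python
# Rough averages derived from the quoted 2016 statistics:
# 582k workunits submitted, 7557 CPU years of computation,
# ~384k workunits returned, ~20 million model years simulated.

submitted = 582_000
cpu_years = 7557
returned = 384_000
model_years = 20_000_000

# Average real CPU time per submitted workunit, in days:
cpu_days_per_wu = cpu_years * 365 / submitted
print(f"{cpu_days_per_wu:.1f} CPU days per workunit")            # ≈ 4.7

# Average simulated model years per returned workunit:
print(f"{model_years / returned:.0f} model years per workunit")  # ≈ 52
```

So each returned workunit simulated on the order of half a century of model weather for a few CPU-days of real computation.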
At the beginning of the project we only deposited data at Oxford or the Rutherford laboratory, expanding to Oregon and Tasmania as further projects were funded. In 2016 this increased with the commissioning of further upload servers around the globe, including in Mexico, South Korea and India. We have striven to rationalise the upload servers, moving towards a common deployment mechanism for them all. This will allow us to deal with problems more rapidly, redeploying servers on timescales that allow continued, uninterrupted operation in the case of infrastructure problems.
We have also been investigating how we might support different types of project that need a more precise definition of when results will be returned, or that have urgent computing requirements. This has included investigating the use of the cloud, with significant grants from AWS allowing us to do proof-of-concept development on tiering resources attached to BOINC projects (16-17).
The two snapshots representing our longstanding partnerships give an idea of the possibilities arising from new partnerships in Mexico, South Korea, India, Kenya and Ethiopia in 2017 and beyond.
2016 partner snapshot
During 2016, the Oregon State University team made progress on three experiments. The first attempts to improve regional climate model simulations of both the climate and vegetation of the western US. A central part of this experiment is exploring the sensitivity of energy fluxes, water transport, and vegetation distribution to model parameters. The second experiment investigates future forest health by looking at projections of climatic forest stressors into the mid-21st century.
Lastly, the third experiment asks this specific question: Did anthropogenic greenhouse gases increase the probability of major bark beetle outbreaks in western North America during the first decade of the 21st century? A warmer and drier growing season can reduce the vigor of trees, increasing their susceptibility to insects, and in recent years bark beetles have killed many whitebark pine trees throughout the western US and British Columbia.
Results from prior experiments were also published this year. These include studies that explored the role of anthropogenic greenhouse gases in the Central US drought of 2012 (8) and the 2015 “snow drought” of the US west coast states (2), as well as our first looks at the future climate of the western US as simulated by weather@home (9,18).
The weather@home regional climate modelling system for Australia and New Zealand has been used for a number of different experiments in 2016. These include:
In total, more than 100,000 years of simulations have been completed and most have been analyzed.
Significant outputs during the year include publication of the paper describing the weather@home ANZ modelling system and the evaluation of its performance (10). Two papers analyzing extreme events in 2015 in Australia using the 2015 simulations were published in the 2016 Bulletin of the American Meteorological Society supplement on Explaining Extreme Events. These examined the record high temperatures in October 2015 in southeast Australia and the record low rainfall in Tasmania in October 2015 (20,21).
Mitchell Black also completed and submitted his PhD thesis in October 2016, which used all these simulations. Examiners’ reports have been received and recommend minor revisions only. Mitch was the key person involved in setting up and running the w@h ANZ experiments for the last three years and he has now moved to a postdoctoral research position in CSIRO. Andrew King at Melbourne University and Sue Rosier at NIWA in New Zealand will take greater roles in setting up and running w@h ANZ experiments in 2017.
|Summer school – How do Global Teleconnections Impact on Climate?|
Thursday 2 February 2017 22:25
Organized by the Potsdam Institute for Climate Impact Research (PIK), the GOTHAM Summer School (18th-22nd September 2017) will train young scientists on a unique combination of interdisciplinary scientific topics and tools relevant for understanding teleconnections and their role in causing extreme weather events. Professor Wallom and Dr Sparrow will be training attendees on data management skills along with how to use CPDN within their scientific experiments.
Teleconnections are defined by the American Meteorological Society as “a linkage between weather changes occurring in widely separated regions of the globe”. GOTHAM is a new project involving CPDN that aims to identify the relative impact of different teleconnections (remote drivers) on regional climate and extreme weather events.
The school, this year themed on Global Teleconnections in the Earth’s Climate System – Processes, Modelling and Advanced Analysis Methods, comprises lectures as well as tutorial sessions by some of the world’s leading experts in this field.
Specific topics include:
• Global consequences of extreme El Niños
• Mid-latitude weather extremes and their drivers
• Stratosphere dynamics
• South and East Asian monsoon systems
• Interactions between global teleconnection patterns
• Data management skills
• New methods of teleconnection identification
The Summer School is intended to host 25 young researchers working in relevant topical areas, both from GOTHAM partners and external institutes.
The application process will be announced soon on the official website of the Summer School.
Thursday 2 February 2017 08:16
Another of our articles has been published:
|Migration To LHC@home - No New Tasks Here|
Monday 30 January 2017 11:40
As part of the ongoing consolidation effort, the Theory application has been added to the LHC@home project, so we ask everyone to switch to that project to run the Theory application. The plan is to stop sending new Theory tasks to this project on Wednesday this week.
Thanks to everyone for their continued contribution.
|Researchers Reunite with World Community Grid to Smash Childhood Cancer|
Source: World Community Grid News and Updates
Sunday 29 January 2017 16:25
The Help Fight Childhood Cancer project made a breakthrough discovery when they uncovered several potential drug candidates to fight neuroblastoma. Today, we are proud to announce that the project's lead scientist has assembled an international team to fight even more childhood cancers.