
Silversea Cruises has ordered a sister ship to the ultra-luxury Silver Muse from European shipbuilder Fincantieri, to be delivered in November 2021.

The contract for the new $507-million vessel, to be named Silver Dawn, was signed just weeks after Silversea Cruises finalised a contract with Fincantieri for the construction of Silver Moon, another sister ship to Silver Muse, which is due to be delivered in 2020. Silver Dawn will offer the same sense of intimacy and spacious all-suite accommodation options that characterise Silversea vessels.

“Following the extraordinary success of Silver Muse, we are delighted to announce Silver Dawn as the 11th ship to join the Silversea fleet,” said Silversea Chairman Manfredi Lefebvre d’Ovidio.

“Silver Dawn will bear the same hallmarks of quality that guests currently enjoy on our six-star ships. It was my father’s dream to grow Silversea to at least a 12-ship fleet; today, we are one step closer to fulfilling his vision,” Lefebvre d’Ovidio added.

Fincantieri’s CEO Giuseppe Bono said, “It is a great satisfaction for our Group to see an ambitious project like Silver Muse establish itself on the market and get the highest appreciation from an exclusive and demanding customer like Silversea, that today confirms its trust in us.”

Since 1990, Fincantieri has built 82 cruise ships and has another 44 vessels either in production or in the design phase.

What artificial brains can teach us about how our real brains learn

By Matthew Hutson, Sep. 29, 2017, 3:10 PM

[Image: Psychologists are simulating neural networks to understand how we learn. Credit: DIUNO/ISTOCKPHOTO.COM]

Studying the human mind is tough. You can ask people how they think, but they often don’t know. You can scan their brains, but the tools are blunt. You can damage their brains and watch what happens, but they don’t take kindly to that. So even a task as supposedly simple as the first step in reading, recognizing letters on a page, keeps scientists guessing.

Now, psychologists are using artificial intelligence (AI) to probe how our minds actually work. Marco Zorzi, a psychologist at the University of Padua in Italy, used artificial neural networks to show how the brain might “hijack” existing connections in the visual cortex to recognize the letters of the alphabet, he and colleagues reported last month in Nature Human Behaviour. Zorzi spoke with Science about the study and about his other work. This interview has been edited for brevity and clarity.

Q: What did you learn in your study of letter perception?

A: We first trained the model on patches of natural images, of trees and mountains, and that knowledge then becomes a vocabulary of basic visual features the network uses to learn about letter shapes. This idea of “neural recycling” has been around for some time, but as far as I know this is the first demonstration where you actually gained in performance: we saw better letter recognition in a model that trained on natural images than in one that didn’t. Recycling makes learning letters much faster compared to the same network without recycling. It gives the network a head start.

Q: How does the training work?

A: It uses “unsupervised” learning. After pretraining on the natural images, we feed the neural network unlabeled images of letters. The goal is simply to build an internal model of the data, to find the latent structure. It’s called “generative” because it’s generating patterns from the top down. It uses the knowledge it has learned to interpret the new incoming sensory information.

Later, a simpler algorithm learns to put letter labels on that network’s outputs. This one uses “supervised” learning (we tell it when it’s right and wrong), but most of the work was done by the unsupervised algorithm.
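To make the two-stage procedure concrete, here is a minimal Python sketch of unsupervised pre-training on natural image patches followed by a simple supervised readout for letter labels. It is an illustration of the idea only, not the architecture from the Nature Human Behaviour study (which used deep generative networks); the data arrays are random placeholders, and the choice of a single scikit-learn RBM, the component count, and the training settings are assumptions made for brevity.

```python
# A minimal sketch of the two-stage idea described above: unsupervised,
# generative pre-training on natural image patches, followed by a simple
# supervised readout that attaches letter labels to the learned features.
# Illustration only, not the authors' model; the data are random placeholders
# standing in for real natural-scene patches and letter bitmaps.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder "images": 8x8 patches flattened to 64 values in [0, 1].
natural_patches = rng.random((2000, 64))        # unlabeled natural-scene patches
letter_images = rng.random((500, 64))           # letter bitmaps
letter_labels = rng.integers(0, 26, size=500)   # 26 letter classes (used only in stage 2)

# Stage 1: unsupervised feature learning on natural images (no labels involved).
rbm = BernoulliRBM(n_components=100, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(natural_patches)

# Stage 2: "recycle" the natural-image features for letters and train a
# simple supervised readout on top of them.
letter_features = rbm.transform(letter_images)
readout = LogisticRegression(max_iter=1000)
readout.fit(letter_features, letter_labels)

# With random placeholder data the score is meaningless; with real images the
# comparison of interest is against the same readout trained on a network
# that never saw natural scenes.
print("training accuracy:", readout.score(letter_features, letter_labels))
```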
Q: Why focus on unsupervised learning, which is much less common in AI?

A: With supervised learning, you are assuming that you have a teacher providing the correct label at each learning event. Think about how we humans learn. This very rarely happens.

Supervised learning is a feed-forward, bottom-up approach, unlike the top-down approach of unsupervised learning. There are a lot of feedback connections in the brain. Moreover, there is intrinsic activity in the brain, which is one of the more interesting findings of the last 20 years or so in neuroimaging. It’s not generated by sensory stimuli. Intrinsic activity can only come from activating neurons in high layers and then propagating this activity back and forth around the network. It can be described as a form of “dreaming” or “imagery.” When combined with sensory activity, top-down feedback leads to interpretation of the input. For example, if a written word is partially blocked, readers can fill in what they don’t see based on what they expect.

The other advantage of unsupervised learning is that since there is no assigned task, knowledge is not tied to a specific application. It’s easy to learn a new task by using this higher-level knowledge. An example is that learning what numbers mean is later applied to learning arithmetic.

Q: The part of your network trained on natural images was still more responsive to images of real letters versus made-up ones. Does that mean real letters somehow resemble nature?
A: Yes, this is one explanation. There’s this hypothesis that has been around for some time that the shapes of symbols across all writing systems have been culturally selected to better match the statistics of our visual environment. You can think about this in terms of the type of shapes needed to better suit brains trained on nature.

Q: What else have you learned about human cognition?

A: We know that babies and animals can compare numbers of objects even without labels. We found that deep unsupervised learning on images containing different numbers of objects yields this visual number sense in a neural network. It was the first study using deep learning for cognitive modeling.

With neural networks, you have a learning algorithm. You can try to map the learning trajectory of the network onto human developmental data. Take something like learning to read. If you have a computer model that learns to read, you may also try to understand atypical learning, as in dyslexia.

Q: What have you found about dyslexia?

A: There’s a huge debate. What is the core deficit? People have looked at phonological, visual, and attentional deficits. We tested these hypotheses in a computer model of reading development. In a study that has not yet been published, we observed that unless you assume dyslexia is caused by more than one deficit, there’s no way to explain the diversity among real dyslexic children. Where this approach is going is toward building personalized models of individuals and using the simulations to predict the outcomes of interventions.

Q: Could simulating the brain like this also improve AI?

A: I think so. Bringing in more constraints from the information we have about the brain and how people learn can give us some new ideas on how to explore new learning solutions.