Festo announces the development of its UR+ certified multi-axis solution for Universal Robots (UR) cobots. The certified UR+ system adds up to four axes of motion beyond the UR cobot's six axes and embodies Festo's precision, reliability, and longevity. Because it arrives ready to install, the solution lowers customer engineering time and speeds startup.
Cobot applications are intentionally simple to configure and operate, and this multi-axis system is no different. No programming is involved in setup, and no additional PLC is required. The added axes are configured through the UR HMI: end users simply set position, speed, and acceleration on the HMI or, using the URCap toolbar, move the axes in manual mode to configure motion.
The multi-axis system features the Festo Motion Control Package (FMCP) for UR, a complete motion control panel for up to four axes of motion. The FMCP is fully integrated with the UR cobot control panel and HMI and features a UR safety I/O and communications interface.
In addition to a seventh axis used for linear transfer, the FMCP can control turning tables, automatic storage systems, conveyors and transfer tables, all under the UR umbrella. The FMCP has extra space within the panel for future expansion and brackets for wall mounting, which reduces footprint.
The seventh axis also extends the range of action of a UR cobot in applications such as palletizing and machine tending. Festo EGC belt-driven or ball-screw linear axes come equipped with a cobot mounting plate. The EGC has an energy chain for cable management and a servo motor optimized for performance. Standard EGC axes are available in lengths of up to 8 m, with axes up to 10 m available by request.
Festo offers a wide range of grippers such as vacuum, mechanical and the innovative Magswitch magnetic grippers, including the cobot smart gripper E30. Automatic tool changers are also available.
Visit the UR Festo Multi-Axis Solution page on the UR website for solution details and to request additional information.
The post Festo introduces the UR+ certified multi-axis cobot system appeared first on Design World.
Cognitive Scientists Develop New Model Explaining Difficulty in Language Comprehension
Cognitive scientists have long sought to understand what makes some sentences more difficult to comprehend than others. Researchers believe that any account of language comprehension would benefit from an understanding of these difficulties.
In recent years, researchers developed two models explaining two significant types of difficulty in understanding and producing sentences. While these models successfully predict specific patterns of comprehension difficulties, their predictions are limited and don't fully match results from behavioral experiments. Moreover, until recently, researchers couldn't integrate these two models into a coherent account.
A new study led by researchers from MIT’s Department of Brain and Cognitive Sciences (BCS) now provides such a unified account for difficulties in language comprehension. Building on recent advances in machine learning, the researchers developed a model that better predicts the ease, or lack thereof, with which individuals produce and comprehend sentences. They recently published their findings in the Proceedings of the National Academy of Sciences.
The senior authors of the paper are BCS professors Roger Levy and Edward (Ted) Gibson. The lead author is Levy and Gibson’s former visiting student, Michael Hahn, now a professor at Saarland University. The second author is Richard Futrell, another former student of Levy and Gibson who is now a professor at the University of California at Irvine.
“This is not only a scaled-up version of the existing accounts for comprehension difficulties,” says Gibson. “We offer a new underlying theoretical approach that allows for better predictions.”
The researchers built on the two existing models to create a unified theoretical account of comprehension difficulty. Each of these older models identifies a distinct culprit for frustrated comprehension: difficulty in expectation and difficulty in memory retrieval. We experience difficulty in expectation when a sentence doesn’t easily allow us to anticipate its upcoming words. We experience difficulty in memory retrieval when we have a hard time tracking a sentence featuring a complex structure of embedded clauses, such as: “The fact that the doctor who the lawyer distrusted annoyed the patient was surprising.”
In 2020, Futrell first devised a theory unifying these two models. He argued that limits in memory don’t affect only retrieval in sentences with embedded clauses but plague all language comprehension; our memory limitations don’t allow us to perfectly represent sentence contexts during language comprehension more generally.
Thus, according to this unified model, memory constraints can create a new source of difficulty in anticipation. We can have difficulty anticipating an upcoming word in a sentence even if the word should be easily predictable from context, when the sentence context itself is difficult to hold in memory. Consider, for example, a sentence beginning with the words “Bob threw the trash…”: we can easily anticipate the final word, “out.” But if the sentence context preceding the final word is more complex, difficulties in expectation arise: “Bob threw the old trash that had been sitting in the kitchen for several days [out].”
Researchers quantify comprehension difficulty by measuring the time it takes readers to respond to different comprehension tasks. The longer the response time, the more challenging the comprehension of a given sentence. Results from prior experiments showed that Futrell’s unified account predicted readers’ comprehension difficulties better than the two older models. But his model didn’t identify which parts of the sentence we tend to forget — and how exactly this failure in memory retrieval obfuscates comprehension.
Hahn’s new study fills in these gaps. In the new paper, the cognitive scientists from MIT joined Futrell to propose an augmented model grounded in a new coherent theoretical framework. The new model identifies and corrects missing elements in Futrell’s unified account and provides new fine-tuned predictions that better match results from empirical experiments.
As in Futrell’s original model, the researchers begin with the idea that our mind, due to memory limitations, doesn’t perfectly represent the sentences we encounter. But to this they add the theoretical principle of cognitive efficiency. They propose that the mind tends to deploy its limited memory resources in a way that optimizes its ability to accurately predict new word inputs in sentences.
This notion leads to several empirical predictions. According to one key prediction, readers compensate for their imperfect memory representations by relying on their knowledge of the statistical co-occurrences of words in order to implicitly reconstruct the sentences they read in their minds. Sentences that include rarer words and phrases are therefore harder to remember perfectly, making it harder to anticipate upcoming words. As a result, such sentences are generally more challenging to comprehend.
To evaluate whether this prediction matches our linguistic behavior, the researchers utilized GPT-2, an AI natural language tool based on neural network modeling. This machine learning tool, first made public in 2019, allowed the researchers to test the model on large-scale text data in a way that wasn’t possible before. But GPT-2’s powerful language modeling capacity also created a problem: In contrast to humans, GPT-2’s immaculate memory perfectly represents all the words in even very long and complex texts that it processes. To more accurately characterize human language comprehension, the researchers added a component that simulates human-like limitations on memory resources — as in Futrell’s original model — and used machine learning techniques to optimize how those resources are used — as in their new proposed model. The resulting model preserves GPT-2’s ability to accurately predict words most of the time, but shows human-like breakdowns in cases of sentences with rare combinations of words and phrases.
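In spirit, the resulting model pairs a predictive language model with a noisy, resource-limited memory. The idea can be sketched, in heavily simplified form, with a toy bigram model and random forgetting standing in for GPT-2 and the paper's optimized memory encoding (the corpus, the forgetting scheme, and all numbers below are illustrative assumptions, not the authors' implementation):

```python
import math
import random
from collections import Counter, defaultdict

# Toy corpus (an illustrative assumption, not the study's training data).
corpus = (
    "bob threw the trash out . "
    "bob threw the old trash that had been sitting in the kitchen out . "
    "alice took the trash out ."
).split()

# Bigram counts: how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

VOCAB = len(set(corpus))

def surprisal(prev, word):
    """Add-one-smoothed bigram surprisal (in bits) of `word` given the previous word."""
    counts = bigrams[prev]
    p = (counts[word] + 1) / (sum(counts.values()) + VOCAB)
    return -math.log2(p)

def lossy_surprisal(context, word, forget_p, rng):
    """Surprisal after memory noise: each context word is independently
    forgotten with probability forget_p; the prediction then relies on
    the last surviving context word."""
    kept = [w for w in context if rng.random() > forget_p] or [context[-1]]
    return surprisal(kept[-1], word)

rng = random.Random(0)
ctx = "bob threw the old trash that had been sitting in the kitchen".split()
perfect = surprisal(ctx[-1], "out")
noisy = sum(lossy_surprisal(ctx, "out", 0.3, rng) for _ in range(1000)) / 1000
print(f"perfect memory: {perfect:.2f} bits; lossy memory: {noisy:.2f} bits")
```

With the longer context, forgetting intermediate words pushes the average surprisal of “out” above the perfect-memory baseline, mirroring the qualitative pattern described above: memory noise makes otherwise predictable words harder to anticipate.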
“This is a wonderful illustration of how modern tools of machine learning can help develop cognitive theory and our understanding of how the mind works,” says Gibson. “We couldn’t have conducted this research here even a few years ago.”
The researchers fed the machine learning model a set of sentences with complex embedded clauses such as, “The report that the doctor who the lawyer distrusted annoyed the patient was surprising.” They then took these sentences and replaced their opening nouns (“report” in the example above) with other nouns, each with its own probability of occurring with a following clause. Some nouns made the sentences into which they were slotted easier for the AI program to “comprehend.” For instance, the model was able to more accurately predict how these sentences end when they began with the common phrasing “The fact that” than when they began with the rarer phrasing “The report that.”
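The noun manipulation can be illustrated with hypothetical clause-taking statistics (the probabilities below are invented for illustration; the study estimated such statistics from large-scale text via GPT-2): a noun that rarely takes a “that”-clause makes the continuation more surprising.

```python
import math

# Hypothetical probabilities that each opening noun is followed by a
# "that"-complement clause (illustrative numbers, not from the paper).
p_clause = {"fact": 0.60, "idea": 0.40, "report": 0.15}

# Surprisal (in bits) of encountering "that <clause>" after "The <noun>".
surprisal_bits = {noun: -math.log2(p) for noun, p in p_clause.items()}

for noun in sorted(surprisal_bits, key=surprisal_bits.get):
    print(f"The {noun} that ...  surprisal = {surprisal_bits[noun]:.2f} bits")
```

On this picture, the higher surprisal of the rare opening corresponds to the longer reading times and distorted recall the researchers observed for sentences beginning with “The report that.”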
The researchers then set out to corroborate the AI-based results by conducting experiments with participants who read similar sentences. Their response times to the comprehension tasks closely matched the model's predictions. “When the sentences begin with the words ‘report that,’ people tended to remember the sentence in a distorted way,” says Gibson. The rare phrasing further constrained their memory and, as a result, their comprehension.
These results demonstrate that the new model outperforms existing models in predicting how humans process language.
Another advantage the model demonstrates is its ability to offer varying predictions from language to language. “Prior models could explain why certain language structures, like sentences with embedded clauses, may be generally harder to work with within the constraints of memory, but our new model can explain why the same constraints behave differently in different languages,” says Levy. “Sentences with center-embedded clauses, for instance, seem to be easier for native German speakers than native English speakers, since German speakers are used to reading sentences where subordinate clauses push the verb to the end of the sentence.”
According to Levy, further research on the model is needed to identify causes of inaccurate sentence representation other than embedded clauses. “There are other kinds of ‘confusions’ that we need to test.” Simultaneously, Hahn adds, “the model may predict other ‘confusions’ which nobody has even thought about. We’re now trying to find those and see whether they affect human comprehension as predicted.”
Another question for future studies is whether the new model will lead to a rethinking of a long line of research focusing on the difficulties of sentence integration: “Many researchers have emphasized difficulties relating to the process in which we reconstruct language structures in our minds,” says Levy. “The new model possibly shows that the difficulty relates not to the process of mental reconstruction of these sentences, but to maintaining the mental representation once they are already constructed. A big question is whether or not these are two separate things.”
One way or another, adds Gibson, “this kind of work marks the future of research on these questions.”
Source Here: news.mit.edu
Professor Michel DeGraff Named a Fellow of the Linguistic Society of America
Professor Michel DeGraff of MIT Linguistics has been elected as a fellow of the Linguistic Society of America (LSA), the highest academic honor within the field of linguistics, in recognition of his dynamic and impactful scholarship in Creole studies with a focus on Haitian Creole (or “Kreyòl,” as it's called in Haiti).
DeGraff’s scholarship into the history and linguistics of Haitian Creole goes hand-in-hand with his long-standing activism for full recognition of Kreyòl as a perfectly normal language in all sectors of Haitian society, especially in education.
“It's a really truly deeply appreciated honor for me to have been selected as a fellow of the Linguistic Society of America,” reflects DeGraff, “and to join the ranks of such esteemed friends and colleagues as Marlyse Baptista, John Baugh, Anne Charity Hudley, Noam Chomsky, Salikoko Mufwene, John Rickford, and so many others whom I so admire, near and far.”
The LSA, founded in 1924, is the premier organization supporting and disseminating the scientific study of language, both in academic settings and for the general public. MIT faculty and alumni represent a disproportionately large number of LSA fellows, reflecting the continuing influence of MIT's linguistics program on the field as a whole. The LSA fellowship has previously been awarded to many of the field's luminaries based at MIT, including MIT's Noam Chomsky, Morris Halle, David Pesetsky, Donca Steriade, Irene Heim, Kai von Fintel, and Sabine Iatridou.
Founded in 1961, the Graduate Program in Linguistics at MIT quickly became a leading center for research on formal models of human-language phonology, morphology, and syntax. Today, MIT graduates can be found in many of the leading linguistics departments in the world, providing much of the intellectual community that defines contemporary linguistics. MIT’s Linguistics program has been named the top program globally for multiple years by QS University Rankings.
“I also value this honor as yet another kudo to MIT’s linguistics program,” says DeGraff, “and what I myself have contributed to it as a Haitian linguist whose work defies some of the traditional intellectual boundaries in the field. Indeed, this fellowship doubles as a most valuable appreciation for the often-undervalued work that my MIT-Haiti team and I have been doing at the sweet spot where de-colonial inquiries and advocacy in linguistics, education, and social justice intersect. The discovery of this rich intersectional dynamic would have been impossible without close collaboration and friendships with colleagues both at MIT and in Haiti across the unfortunate North-South divide. At MIT alone, I am particularly grateful to valiant educators at MIT Open Learning, MIT’s Teaching and Learning Lab, plus so many other units in SHASS [the School of Humanities, Arts and Social Sciences], the School of Science, the School of Engineering, and MIT Sloan [School of Management]. In Haiti, our colleagues — true pioneers, really — are too numerous to cite them all!”
Danny Fox, head of MIT Linguistics, celebrated the honor as recognition of DeGraff’s multifaceted, socially-minded approach to his field, as well as recognition of his outstanding scholarship. “It is good to see a professional organization like the LSA promoting scientists not just for their research, but also for the kind of activism that might accompany it: battling prevalent misconceptions about the nature of the world, identifying their detrimental consequences, and fighting for change. Michel has been involved in all these activities, mostly through the MIT-Haiti initiative, which he was instrumental in establishing. We are all very proud.”
DeGraff’s scholarship and activism has had tremendous influence on the linguistic study of Haitian Creole. In 2010, he co-founded the MIT-Haiti Initiative with Vijay Kumar of MIT Open Learning, a project that seeks to advance “development, evaluation, and dissemination of active-learning resources in Kreyòl to help improve education in Haiti. This is part of larger efforts to valorize Haitian culture and identity, and to promote human rights and national sovereignty.” The initiative includes projects such as Platfòm MIT-Ayiti pou yon lekòl tèt an wo, an online database for the curating and sharing of teaching materials in Haitian Creole for learning at all grade levels, which he leads with Professor Haynes Miller (MIT Department of Mathematics) with funding from MIT Abdul Latif Jameel World Education Lab.
DeGraff is also a founding member of Akademi Kreyòl Ayisyen (the Haitian Creole Academy), a state institution launched in 2014 for the promotion of Haitian Creole, as mandated by the 1987 Haitian Constitution.
He views this honor as an opportunity to further highlight and advance the important work of visibility and preservation that motivates his dedication to the field: “This work is crucial in making a better world, especially for communities that are impoverished because of colonial and neo-colonial language barriers that are often made invisible, even among linguists. I plan to use this LSA fellowship as one more bullhorn to keep pushing forward this agenda for social justice through linguistics and education. This fellowship could not be more timely.”
DeGraff will be officially inducted as a fellow on Friday, Jan. 6, 2023, alongside seven other top scholars in the field, representing universities from around the globe.
Garmin Forerunner 255 Series, Forerunner 955 Solar Launched in India: Price, Specifications
Garmin has launched the Forerunner 955 and Forerunner 955 Solar, along with the Forerunner 255 series, in India. The company claims the Forerunner 955 Solar is the world's first GPS running smartwatch with solar charging. The Power Glass solar charging on the Forerunner 955 gives a claimed 50 percent more battery life and gives athletes up to 49 hours of battery life in G…