Meaning “believe, accept as true” is attested by 1926.
I couldn’t find any evidence to support this usage in the early decades of the 20th century, and looking at Google Books it appears the expression actually took off from the ’60s.
My questions are:

When did the above connotation of the verb “buy” actually come into usage?
Is the origin of this meaning somehow connected to the spread of TV commercials in the ’50s or ’60s? Was it originally an AmE or a BrE expression?
2 Answers
The OED has examples from 1926, 1944, 1949, 1951, and 1952.
The 1926 example is from E. Wallace, More Educated Evans: “‘It’s rather early in the day for fairy-tales,’ he said, ‘but I’ll buy this one.’”
It describes the usage as "Chiefly U.S.".
I am referring to the idiomatic expression “I don’t buy it”, meaning I don’t think it is true.
The earliest relevant example of similar usage I ran across on Google Books was this tidbit from Buds and Blossoms of Piety, With Some Fruit of the Spirit of Love: and Directions to the Divine Wisdom, The Fourth Edition, by Benjamin Antrobus (an early Quaker who preached and was persecuted and imprisoned in London), published in London in the year MDCCXLIII (1743).
This couplet is said to be from a letter to the author while in prison, from someone known as W.L.:
Let no Dove-sellers in the Temple dwell, There’s Room to buy the Truth, but not to sell.
Talk English alludes to the implied contractual or transactional context to be found in the phrase, I don’t buy it:
if you “don’t buy it,” then you are not agreeing.
As in business transactions a sale isn’t finalized until both parties agree on the price or terms, likewise a truth or fact presented isn’t “bought” (or sold, either) unless the audience and presenter are in agreement.
Go and sell them the idea.
Okay, but what if they don’t buy it?
Corroborating the transactional context of the term “buy” are the many references to the act of reaching agreement on various deals for merchandise or favors to be found within the volumes of Cobbett’s Complete Collection of State Trials and Proceedings for High Treason: and Other Crimes and Misdemeanors from the Earliest Period to the Present Time ... From the Ninth Year of the Reign of King Henry, the Second, A.D. 1163, to ...
"Let us sell him the
Also, in Samuel Johnson’s A Dictionary of the English Language, published in 1773, one of the several meanings given for agree is: to settle a price between buyer and seller.
In conclusion, “I don’t buy it” means: “I disagree.”
...and a reasonable form of the same idiom (but with opposite meaning, to agree -- that is, “to buy”, with respect to Truth) may be traced as far back as 1743, to Benjamin Antrobus.
Yes, there are sobering risks, but also potential for huge advances. We need to agree some global rules of the game
AI tools like ChatGPT are everywhere. It is the combination of computational power and availability of data that has led to a surge in AI technology, but the reason models such as ChatGPT and Bard have made such a spectacular splash is that they have hit our own homes, with around 100 million people currently using them.
This has led to a very fraught public debate. It is predicted that a quarter of all jobs will be affected one way or another by AI, and some companies are holding back on recruitment to see which jobs can be automated. Fears about AI can move markets, as we saw yesterday when Pearson shares tumbled over concerns that AI would disrupt its business. And, looming above the day-to-day debate, are the sometimes apocalyptic warnings about the long-term dangers of AI technologies – often from loud and arguably authoritative voices belonging to executives and researchers who developed these technologies.
Last month, science, tech and business leaders signed a letter which asked for a pause in AI development. And this week, the pioneering AI researcher Geoffrey Hinton said that he feared that AI could rapidly become smarter than humanity, and could easily be put to malicious use.
So, are people right to raise the spectre of apocalyptic AI-driven destruction? In my view, no. I agree that there are some sobering risks. But people are beginning to understand that these are socio-technical systems. That is, not just neutral tools, but an inextricable bundle of code, data, subjective parameters and people. AI’s end uses, and the direction it develops, aren’t inevitable. And addressing the risks of AI isn’t simply a question of “stop” or “proceed”.
Researchers such as Joy Buolamwini, Ruha Benjamin and Timnit Gebru have long highlighted how the context in which AI technologies are produced and used can influence what we get out, explaining why AI systems can produce discriminatory outcomes, such as allocating less credit to women, failing to recognise black faces, or incorrectly determining that immigrant families are at higher risk of committing fraud. These are societal problems we already recognise, and they show the need for society to reach a consensus on the right direction for technical innovation, the responsible use of these technologies and the constraints that should be imposed upon them.
‘President Joe Biden wants a bill of rights to cater for people’s rights in the age of AI.’ Photograph: Leah Millis/Reuters
Fortunately, countries around the world are already grappling with these issues. The US, the EU, India, China and others are rolling out controls and revising regulatory approaches. Meanwhile, global standards are emerging. President Joe Biden wants a bill of rights to cater for people’s rights in the age of AI, and the UN has announced a global digital compact to ensure existing human rights can be upheld in the digital age. Global campaigns such as AUDRi are pushing for the Digital Compact to be effective worldwide.
Companies are aware of these issues as they work on new systems. OpenAI, the company behind ChatGPT, sums them up pretty well. It recognises that, while a lot has been done to root out racism and other forms of hate from ChatGPT’s responses, manipulation and hallucination (which means producing content that is nonsensical or untruthful, essentially making stuff up) still happen. I am confident that trial and error, plus burgeoning research in this area, will help.
Specific and worrying new problems arising from AI technologies also need to be addressed. The biggest risk we are facing is the potential erosion of democracy and “ground truth” that we may face with the proliferation of deepfakes and other AI-generated misinformation. What will happen to our public discourse if we are not able to trust any sources, faces and facts?
However imperfectly, the Italian privacy watchdog did have good reason to put its foot down and temporarily ban ChatGPT. It was an attempt to make plain that even groundbreaking technologies must be subject to the rules, like all other products. While calling for new laws, we can also start by applying the ones we have already. One of them is the General Data Protection Regulation (GDPR), often bitterly condemned, but the only tool that has upheld the rights of citizens, as well as workers, in the age of AI and the algorithmic management of hiring and firing. Privacy law may need to be updated, but its role does demonstrate the importance of regulation. OpenAI did make some changes in order for the ban to be lifted.
We should also remember that AI presents great opportunities. For example, an AI tool can identify whether abnormal growths found on CT scans are cancerous. Last year, DeepMind predicted the structure of almost every protein so far catalogued by science, cracking one of the great challenges of biology that had flummoxed the world for nearly 50 years.
There is both excitement and fear about this technology. Apocalyptic scenarios of AI similar to those depicted in the Terminator films should not blind us to a more realistic and pragmatic vision that sees the good of AI and addresses the real risks. Rules of the game are necessary, and global agreements are vital if we want to move from somewhat mindless development of AI to responsible and democratised adoption of this new power.
Ivana Bartoletti is a privacy and data protection professional, visiting cybersecurity and privacy fellow at Virginia Tech and founder of the Women Leading in AI Network