Top 100 CAT 2024 VARC Questions (Most Expected)

Pankaj Rathore

Nov 15, 2024

Click the link below to download the top 100 CAT 2024 VARC questions PDF and boost your CAT exam preparation now:

Top 100 CAT 2024 VARC Questions With Solutions PDF

Preparing for the CAT exam, especially the VARC section of the CAT syllabus, can feel overwhelming. To help you get ready, we’ve created a downloadable PDF with the top 100 most likely VARC questions. These questions are crafted to match the CAT exam’s level of difficulty and format, giving you realistic practice on your own schedule.

With these 100 VARC questions, you’ll get a clear idea of the types of reading comprehension passages, vocabulary questions, and para jumbles often seen in the CAT exam.

Benefits of Practicing Top 100 CAT 2024 VARC Questions

  • Complete Practice: The 100 CAT 2024 VARC questions cover key VARC areas, including reading comprehension, para jumbles, and vocabulary.
  • Realistic CAT Prep: These questions are designed to match the challenge level of the actual CAT exam, letting you practice under similar conditions.
  • Flexible Study: Download the PDF to study at your own pace and convenience, without needing a classroom setting.
  • Better Accuracy and Speed: Practicing these questions regularly will help you improve your speed and accuracy, making it easier to complete the section on time.

Also Read: CAT Expected Questions 2024, Section-wise Questions PDF

Instructions for the set:

Read the passage carefully and answer the following questions:

Einstein talked a lot about God. He invoked him repeatedly in his physics—so much so that his friend, Niels Bohr, once berated him for constantly telling God what he could do. He was “enthralled by the luminous figure” of Jesus.

Details like these have persuaded millions of religious people around the world that the twentieth century’s greatest physicist was a fellow traveller. They are wrong—as a letter that has just come up for auction underlines. Written in 1952 to the Jewish philosopher Eric Gutkind, who had sent him his book Choose Life: The Biblical Call To Revolt, Einstein does not mince his words. “The word God is for me nothing more than the expression and product of human weaknesses, the Bible a collection of honourable, but still primitive legends which are nevertheless pretty childish.”

Yet, that does not mean that the atheists are right to crow, and that Einstein only ever spoke of God idiomatically, meaning nothing more by his frequent references to the divine. Our star witness here is Einstein himself. “I’m not an atheist and I don’t think I can call myself a pantheist,” he once said when asked to define God. “I believe in Spinoza’s God,” he told Rabbi Herbert Goldstein of the Institutional Synagogues of New York, “who reveals himself in the orderly harmony of what exists.” All the finer speculations in the realm of science “spring from a deep religious feeling,” he remarked in 1930. In the order, beauty and intelligibility of creation, he found signs of the ‘God’.

This was not the personal God of the Abrahamic faiths, but nor was it the idiomatic “God” of atheism. Indeed, Einstein could be equally withering on this point. When asked whether there was an inherent antagonism between science and religion, or whether science would ever supersede religion, he was emphatic in his denial. Nor had he any time for deriving morality from science. “Every attempt to reduce ethics to scientific formulae must fail,” he once remarked. There are still people, he remarked at a charity dinner during the War, who say there is no God. “But what really makes me angry is that they quote me for support of such views.” “There are fanatical atheists whose intolerance is of the same kind as the intolerance of the religious fanatics,” he said in 1940.

Einstein, then, offers scant consolation to either party in this debate. His cosmic religion and distant deistic God of cosmic order and elegance fits neither the agenda of religious believers nor that of tribal atheists. As so often during his life, he refused and disturbed the accepted categories. Einstein once famously remarked that to punish him for his contempt for authority, Fate made him an authority himself. As with physics so with religion.

Question 1

Why does the author say that Einstein offers "scant consolation to either party in this debate"?

Question 2

Which of the following, if true, undermines the main point of the second paragraph?

Question 3

Which of the following can be an appropriate conclusion that the author is trying to make?

Question 4

Which of the following is closest to Einstein's conception of God?

Instructions for the set:

The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.

Steven Pinker’s new book, “Rationality: What It Is, Why It Seems Scarce, Why It Matters,” offers a pragmatic dose of measured optimism, presenting rationality as a fragile but achievable ideal in personal and civic life. . . . Pinker’s ambition to illuminate such a crucial topic offers the welcome prospect of a return to sanity. . . . It’s no small achievement to make formal logic, game theory, statistics and Bayesian reasoning delightful topics full of charm and relevance.

It’s also plausible to believe that a wider application of the rational tools he analyzes would improve the world in important ways. His primer on statistics and scientific uncertainty is particularly timely and should be required reading before consuming any news about the [COVID] pandemic. More broadly, he argues that less media coverage of shocking but vanishingly rare events, from shark attacks to adverse vaccine reactions, would help prevent dangerous overreactions, fatalism and the diversion of finite resources away from solvable but less-dramatic issues, like malnutrition in the developing world.

It’s a reasonable critique, and Pinker is not the first to make it. But analyzing the political economy of journalism — its funding structures, ownership concentration and increasing reliance on social media shares — would have given a fuller picture of why so much coverage is so misguided and what we might do about it.

Pinker’s main focus is the sort of conscious, sequential reasoning that can track the steps in a geometric proof or an argument in formal logic. Skill in this domain maps directly onto the navigation of many real-world problems, and Pinker shows how greater mastery of the tools of rationality can improve decision-making in medical, legal, financial and many other contexts in which we must act on uncertain and shifting information. . . .

Despite the undeniable power of the sort of rationality he describes, many of the deepest insights in the history of science, math, music and art strike their originators in moments of epiphany. From the 19th-century chemist Friedrich August Kekulé’s discovery of the structure of benzene to any of Mozart’s symphonies, much extraordinary human achievement is not a product of conscious, sequential reasoning. Even Plato’s Socrates — who anticipated many of Pinker’s points by nearly 2,500 years, showing the virtue of knowing what you do not know and examining all premises in arguments, not simply trusting speakers’ authority or charisma — attributed many of his most profound insights to dreams and visions. Conscious reasoning is helpful in sorting the wheat from the chaff, but it would be interesting to consider the hidden aquifers that make much of the grain grow in the first place.

The role of moral and ethical education in promoting rational behavior is also underexplored. Pinker recognizes that rationality “is not just a cognitive virtue but a moral one.” But this profoundly important point, one subtly explored by ancient Greek philosophers like Plato and Aristotle, doesn’t really get developed. This is a shame, since possessing the right sort of moral character is arguably a precondition for using rationality in beneficial ways.

Question 5

According to the author, for Pinker as well as the ancient Greek philosophers, rational thinking involves all of the following EXCEPT:


Question 6

The author endorses Pinker’s views on the importance of logical reasoning as it:


Question 7

The author mentions Kekulé’s discovery of the structure of benzene and Mozart’s symphonies to illustrate the point that:


Question 8

The author refers to the ancient Greek philosophers to:


Instructions for the set:

The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.

Understanding romantic aesthetics is not a simple undertaking for reasons that are internal to the nature of the subject. Distinguished scholars, such as Arthur Lovejoy, Northrop Frye and Isaiah Berlin, have remarked on the notorious challenges facing any attempt to define romanticism. Lovejoy, for example, claimed that romanticism is “the scandal of literary history and criticism” . . . The main difficulty in studying the romantics, according to him, is the lack of any “single real entity, or type of entity” that the concept “romanticism” designates. Lovejoy concluded, “the word ‘romantic’ has come to mean so many things that, by itself, it means nothing” . . .

The more specific task of characterizing romantic aesthetics adds to these difficulties an air of paradox. Conventionally, “aesthetics” refers to a theory concerning beauty and art or the branch of philosophy that studies these topics. However, many of the romantics rejected the identification of aesthetics with a circumscribed domain of human life that is separated from the practical and theoretical domains of life. The most characteristic romantic commitment is to the idea that the character of art and beauty and of our engagement with them should shape all aspects of human life. Being fundamental to human existence, beauty and art should be a central ingredient not only in a philosophical or artistic life, but also in the lives of ordinary men and women. Another challenge for any attempt to characterize romantic aesthetics lies in the fact that most of the romantics were poets and artists whose views of art and beauty are, for the most part, to be found not in developed theoretical accounts, but in fragments, aphorisms and poems, which are often more elusive and suggestive than conclusive.

Nevertheless, in spite of these challenges the task of characterizing romantic aesthetics is neither impossible nor undesirable, as numerous thinkers responding to Lovejoy’s radical skepticism have noted. While warning against a reductive definition of romanticism, Berlin, for example, still heralded the need for a general characterization: “[Although] one does have a certain sympathy with Lovejoy’s despair…[he is] in this instance mistaken. There was a romantic movement…and it is important to discover what it is” . . .

Recent attempts to characterize romanticism and to stress its contemporary relevance follow this path. Instead of overlooking the undeniable differences between the variety of romanticisms of different nations that Lovejoy had stressed, such studies attempt to characterize romanticism, not in terms of a single definition, a specific time, or a specific place, but in terms of “particular philosophical questions and concerns” . . .

While the German, British and French romantics are all considered, the central protagonists in the following are the German romantics. Two reasons explain this focus: first, because it has paved the way for the other romanticisms, German romanticism has a pride of place among the different national romanticisms . . . Second, the aesthetic outlook that was developed in Germany roughly between 1796 and 1801-02 — the period that corresponds to the heyday of what is known as “Early Romanticism” . . .— offers the most philosophical expression of romanticism since it is grounded primarily in the epistemological, metaphysical, ethical, and political concerns that the German romantics discerned in the aftermath of Kant’s philosophy.

Question 9

The main difficulty in studying romanticism is the:


Question 10

According to the romantics, aesthetics:


Question 11

Which one of the following statements is NOT supported by the passage?


Question 12

According to the passage, recent studies on romanticism avoid “a single definition, a specific time, or a specific place” because they:


Instructions for the set:

The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.

The biggest challenge [The Nutmeg’s Curse by Ghosh] throws down is to the prevailing understanding of when the climate crisis started. Most of us have accepted . . . that it started with the widespread use of coal at the beginning of the Industrial Age in the 18th century and worsened with the mass adoption of oil and natural gas in the 20th. Ghosh takes this history at least three centuries back, to the start of European colonialism in the 15th century. He [starts] the book with a 1621 massacre by Dutch invaders determined to impose a monopoly on nutmeg cultivation and trade in the Banda islands in today’s Indonesia. Not only do the Dutch systematically depopulate the islands through genocide, they also try their best to bring nutmeg cultivation into plantation mode. These are the two points to which Ghosh returns through examples from around the world. One, how European colonialists decimated not only indigenous populations but also indigenous understanding of the relationship between humans and Earth. Two, how this was an invasion not only of humans but of the Earth itself, and how this continues to the present day by looking at nature as a ‘resource’ to exploit. . . .

We know we are facing more frequent and more severe heatwaves, storms, floods, droughts and wildfires due to climate change. We know our expansion through deforestation, dam building, canal cutting - in short, terraforming, the word Ghosh uses - has brought us repeated disasters . . . Are these the responses of an angry Gaia who has finally had enough? By using the word ‘curse’ in the title, the author makes it clear that he thinks so. I use the pronoun ‘who’ knowingly, because Ghosh has quoted many non-European sources to enquire into the relationship between humans and the world around them so that he can question the prevalent way of looking at Earth as an inert object to be exploited to the maximum.

As Ghosh’s text, notes and bibliography show once more, none of this is new. There have always been challenges to the way European colonialists looked at other civilisations and at Earth. It is just that the invaders and their myriad backers in the fields of economics, politics, anthropology, philosophy, literature, technology, physics, chemistry, biology have dominated global intellectual discourse. . . .

There are other points of view that we can hear today if we listen hard enough. Those observing global climate negotiations know about the Latin American way of looking at Earth as Pachamama (Earth Mother). They also know how such a framing is just provided lip service and is ignored in the substantive portions of the negotiations. In The Nutmeg’s Curse, Ghosh explains why. He shows the extent of the vested interest in the oil economy - not only for oil-exporting countries, but also for a superpower like the US that controls oil drilling, oil prices and oil movement around the world. Many of us know power utilities are sabotaging decentralised solar power generation today because it hits their revenues and control. And how the other points of view are so often drowned out.

Question 13

On the basis of information in the passage, which one of the following is NOT a reason for the failure of policies seeking to address climate change?


Question 14

Which one of the following, if true, would make the reviewer’s choice of the pronoun “who” for Gaia inappropriate?


Question 15

All of the following can be inferred from the reviewer’s discussion of “The Nutmeg’s Curse”, EXCEPT:


Question 16

Which one of the following best explains the primary purpose of the discussion of the colonisation of the Banda islands in “The Nutmeg’s Curse”?


Instructions for the set:

The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.

For early postcolonial literature, the world of the novel was often the nation. Postcolonial novels were usually [concerned with] national questions. Sometimes the whole story of the novel was taken as an allegory of the nation, whether India or Tanzania. This was important for supporting anti-colonial nationalism, but could also be limiting - land-focused and inward-looking.

My new book “Writing Ocean Worlds” explores another kind of world of the novel: not the village or nation, but the Indian Ocean world. The book describes a set of novels in which the Indian Ocean is at the centre of the story. It focuses on the novelists Amitav Ghosh, Abdulrazak Gurnah, Lindsey Collen and Joseph Conrad [who have] centred the Indian Ocean world in the majority of their novels. . . . Their work reveals a world that is outward-looking - full of movement, border-crossing and south-south interconnection. They are all very different - from colonially inclined (Conrad) to radically anti-capitalist (Collen), but together draw on and shape a wider sense of Indian Ocean space through themes, images, metaphors and language. This has the effect of remapping the world in the reader’s mind, as centred in the interconnected global south. . . .

The Indian Ocean world is a term used to describe the very long-lasting connections among the coasts of East Africa, the Arab coasts, and South and East Asia. These connections were made possible by the geography of the Indian Ocean. For much of history, travel by sea was much easier than by land, which meant that port cities very far apart were often more easily connected to each other than to much closer inland cities. Historical and archaeological evidence suggests that what we now call globalisation first appeared in the Indian Ocean. This is the interconnected oceanic world referenced and produced by the novels in my book. . . .

For their part Ghosh, Gurnah, Collen and even Conrad reference a different set of histories and geographies than the ones most commonly found in fiction in English. Those [commonly found ones] are mostly centred in Europe or the US, assume a background of Christianity and whiteness, and mention places like Paris and New York. The novels in [my] book highlight instead a largely Islamic space, feature characters of colour and centralise the ports of Malindi, Mombasa, Aden, Java and Bombay. . . . It is a densely imagined, richly sensory image of a southern cosmopolitan culture which provides for an enlarged sense of place in the world.

This remapping is particularly powerful for the representation of Africa. In the fiction, sailors and travellers are not all European. . . . African, as well as Indian and Arab characters, are traders, nakhodas (dhow ship captains), runaways, villains, missionaries and activists. This does not mean that Indian Ocean Africa is romanticised. Migration is often a matter of force; travel is portrayed as abandonment rather than adventure, freedoms are kept from women and slavery is rife. What it does mean is that the African part of the Indian Ocean world plays an active role in its long, rich history and therefore in that of the wider world.

Question 17

Which one of the following statements is NOT true about migration in the Indian Ocean world?


Question 18

On the basis of the nature of the relationship between the items in each pair below, choose the odd pair out:


Question 19

All of the following statements, if true, would weaken the passage’s claim about the relationship between mainstream English-language fiction and Indian Ocean novels EXCEPT:


Question 20

All of the following claims contribute to the “remapping” discussed by the passage, EXCEPT:


Instruction for set :

The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.

Many human phenomena and characteristics - such as behaviors, beliefs, economies, genes, incomes, life expectancies, and other things - are influenced both by geographic factors and by non-geographic factors. Geographic factors mean physical and biological factors tied to geographic location, including climate, the distributions of wild plant and animal species, soils, and topography. Non-geographic factors include those factors subsumed under the term culture, other factors subsumed under the term history, and decisions by individual people. . . .

[T]he differences between the current economies of North and South Korea . . . cannot be attributed to the modest environmental differences between [them] . . . They are instead due entirely to the different [government] policies . . . At the opposite extreme, the Inuit and other traditional peoples living north of the Arctic Circle developed warm fur clothes but no agriculture, while equatorial lowland peoples around the world never developed warm fur clothes but often did develop agriculture. The explanation is straightforwardly geographic, rather than a cultural or historical quirk unrelated to geography. . . . Aboriginal Australia remained the sole continent occupied only by hunter/gatherers and with no indigenous farming or herding . . . [Here the] explanation is biogeographic: the Australian continent has no domesticable native animal species and few domesticable native plant species. Instead, the crops and domestic animals that now make Australia a food and wool exporter are all non-native (mainly Eurasian) species such as sheep, wheat, and grapes, brought to Australia by overseas colonists.

Today, no scholar would be silly enough to deny that culture, history, and individual choices play a big role in many human phenomena. Scholars don’t react to cultural, historical, and individual-agent explanations by denouncing “cultural determinism,” “historical determinism,” or “individual determinism,” and then thinking no further. But many scholars do react to any explanation invoking some geographic role, by denouncing “geographic determinism” . . .

Several reasons may underlie this widespread but nonsensical view. One reason is that some geographic explanations advanced a century ago were racist, thereby causing all geographic explanations to become tainted by racist associations in the minds of many scholars other than geographers. But many genetic, historical, psychological, and anthropological explanations advanced a century ago were also racist, yet the validity of newer non-racist genetic etc. explanations is widely accepted today.

Another reason for reflex rejection of geographic explanations is that historians have a tradition, in their discipline, of stressing the role of contingency (a favorite word among historians) based on individual decisions and chance. Often that view is warranted . . . But often, too, that view is unwarranted. The development of warm fur clothes among the Inuit living north of the Arctic Circle was not because one influential Inuit leader persuaded other Inuit in 1783 to adopt warm fur clothes, for no good environmental reason.

A third reason is that geographic explanations usually depend on detailed technical facts of geography and other fields of scholarship . . . Most historians and economists don’t acquire that detailed knowledge as part of the professional training.

Question 21

All of the following are advanced by the author as reasons why non-geographers disregard geographic influences on human phenomena EXCEPT their:


Question 22

The author criticises scholars who are not geographers for all of the following reasons EXCEPT:


Question 23

All of the following can be inferred from the passage EXCEPT:


Question 24

The examples of the Inuit and Aboriginal Australians are offered in the passage to show:


Instruction for set :

The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.

RESIDENTS of Lozère, a hilly department in southern France, recite complaints familiar to many rural corners of Europe. In remote hamlets and villages, with names such as Le Bacon and Le Bacon Vieux, mayors grumble about a lack of local schools, jobs, or phone and internet connections. Farmers of grazing animals add another concern: the return of wolves. Eradicated from France last century, the predators are gradually creeping back to more forests and hillsides. “The wolf must be taken in hand,” said an aspiring parliamentarian, Francis Palombi, when pressed by voters in an election campaign early this summer. Tourists enjoy visiting a wolf park in Lozère, but farmers fret over their livestock and their livelihoods. . . .

As early as the ninth century, the royal office of the Luparii—wolf-catchers—was created in France to tackle the predators. Those official hunters (and others) completed their job in the 1930s, when the last wolf disappeared from the mainland. Active hunting and improved technology such as rifles in the 19th century, plus the use of poison such as strychnine later on, caused the population collapse. But in the early 1990s the animals reappeared. They crossed the Alps from Italy, upsetting sheep farmers on the French side of the border. Wolves have since spread to areas such as Lozère, delighting environmentalists, who see the predators’ presence as a sign of wider ecological health. Farmers, who say the wolves cause the deaths of thousands of sheep and other grazing animals, are less cheerful. They grumble that green activists and politically correct urban types have allowed the return of an old enemy.

Various factors explain the changes of the past few decades. Rural depopulation is part of the story. In Lozère, for example, farming and a once-flourishing mining industry supported a population of over 140,000 residents in the mid-19th century. Today the department has fewer than 80,000 people, many in its towns. As humans withdraw, forests are expanding. In France, between 1990 and 2015, forest cover increased by an average of 102,000 hectares each year, as more fields were given over to trees. Now, nearly one-third of mainland France is covered by woodland of some sort. The decline of hunting as a sport also means more forests fall quiet. In the mid-to-late 20th century over 2m hunters regularly spent winter weekends tramping in woodland, seeking boars, birds and other prey. Today the Fédération Nationale des Chasseurs, the national body, claims 1.1m people hold hunting licences, though the number of active hunters is probably lower. The mostly protected status of the wolf in Europe—hunting them is now forbidden, other than when occasional culls are sanctioned by the state—plus the efforts of NGOs to track and count the animals, also contribute to the recovery of wolf populations.

As the lupine population of Europe spreads westwards, with occasional reports of wolves seen closer to urban areas, expect to hear of more clashes between farmers and those who celebrate the predators’ return. Farmers’ losses are real, but are not the only economic story. Tourist venues, such as parks where wolves are kept and the animals’ spread is discussed, also generate income and jobs in rural areas.

Question 25

Which one of the following has NOT contributed to the growing wolf population in Lozère?


Question 26

The inhabitants of Lozère have to grapple with all of the following problems, EXCEPT:


Question 27

Which one of the following statements, if true, would weaken the author’s claims?


Question 28

The author presents a possible economic solution to an existing issue facing Lozère that takes into account the divergent and competing interests of:


Instruction for set :

The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.

[Fifty] years after its publication in English [in 1972], and just a year since [Marshall] Sahlins himself died—we may ask: why did [his essay] “Original Affluent Society” have such an impact, and how has it fared since? . . . Sahlins’s principal argument was simple but counterintuitive: before being driven into marginal environments by colonial powers, hunter-gatherers, or foragers, were not engaged in a desperate struggle for meager survival. Quite the contrary, they satisfied their needs with far less work than people in agricultural and industrial societies, leaving them more time to use as they wished. Hunters, he quipped, keep bankers’ hours. Refusing to maximize, many were “more concerned with games of chance than with chances of game.” . . . The so-called Neolithic Revolution, rather than improving life, imposed a harsher work regime and set in motion the long history of growing inequality . . .

Moreover, foragers had other options. The contemporary Hadza of Tanzania, who had long been surrounded by farmers, knew they had alternatives and rejected them. To Sahlins, this showed that foragers are not simply examples of human diversity or victimhood but something more profound: they demonstrated that societies make real choices. Culture, a way of living oriented around a distinctive set of values, manifests a fundamental principle of collective self-determination. . . .

But the point [of the essay] is not so much the empirical validity of the data—the real interest for most readers, after all, is not in foragers either today or in the Paleolithic—but rather its conceptual challenge to contemporary economic life and bourgeois individualism. The empirical served a philosophical and political project, a thought experiment and stimulus to the imagination of possibilities.

With its title’s nod toward The Affluent Society (1958), economist John Kenneth Galbraith’s famously skeptical portrait of America’s postwar prosperity and inequality, and dripping with New Left contempt for consumerism, “The Original Affluent Society” brought this critical perspective to bear on the contemporary world. It did so through the classic anthropological move of showing that radical alternatives to the readers’ lives really exist. If the capitalist world seeks wealth through ever greater material production to meet infinitely expansive desires, foraging societies follow “the Zen road to affluence”: not by getting more, but by wanting less. If it seems that foragers have been left behind by “progress,” this is due only to the ethnocentric self-congratulation of the West. Rather than accumulate material goods, these societies are guided by other values: leisure, mobility, and above all, freedom. . . .

Viewed in today’s context, of course, not every aspect of the essay has aged well. While acknowledging the violence of colonialism, racism, and dispossession, it does not thematize them as heavily as we might today. Rebuking evolutionary anthropologists for treating present-day foragers as “left behind” by progress, it too can succumb to the temptation to use them as proxies for the Paleolithic. Yet these characteristics should not distract us from appreciating Sahlins’s effort to show that if we want to conjure new possibilities, we need to learn about actually inhabitable worlds.

Question 29

We can infer that Sahlins's main goal in writing his essay was to:


Question 30

The author mentions Tanzania’s Hadza community to illustrate:


Question 31

The author of the passage mentions Galbraith’s “The Affluent Society” to:


Question 32

The author of the passage criticises Sahlins’s essay for its:


Instruction for set :

The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.

In 2006, the Met [art museum in the US] agreed to return the Euphronios krater, a masterpiece Greek urn that had been a museum draw since 1972. In 2007, the Getty [art museum in the US] agreed to return 40 objects to Italy, including a marble Aphrodite, in the midst of looting scandals. And in December, Sotheby’s and a private owner agreed to return an ancient Khmer statue of a warrior, pulled from auction two years before, to Cambodia.

Cultural property, or patrimony, laws limit the transfer of cultural property outside the source country’s territory, including outright export prohibitions and national ownership laws. Most art historians, archaeologists, museum officials and policymakers portray cultural property laws in general as invaluable tools for counteracting the ugly legacy of Western cultural imperialism.

During the late 19th and early 20th century — an era former Met director Thomas Hoving called “the age of piracy” — American and European art museums acquired antiquities by hook or by crook, from grave robbers or souvenir collectors, bounty from digs and ancient sites in impoverished but art-rich source countries. Patrimony laws were intended to protect future archaeological discoveries against Western imperialist designs. . . .

I surveyed 90 countries with one or more archaeological sites on UNESCO’s World Heritage Site list, and my study shows that in most cases the number of discovered sites diminishes sharply after a country passes a cultural property law. There are 222 archaeological sites listed for those 90 countries. When you look into the history of the sites, you see that all but 21 were discovered before the passage of cultural property laws. . . .

Strict cultural patrimony laws are popular in most countries. But the downside may be that they reduce incentives for foreign governments, nongovernmental organizations and educational institutions to invest in overseas exploration because their efforts will not necessarily be rewarded by opportunities to hold, display and study what is uncovered. To the extent that source countries can fund their own archaeological projects, artifacts and sites may still be discovered. . . . The survey has far-reaching implications. It suggests that source countries, particularly in the developing world, should narrow their cultural property laws so that they can reap the benefits of new archaeological discoveries, which typically increase tourism and enhance cultural pride. This does not mean these nations should abolish restrictions on foreign excavation and foreign claims to artifacts.

China provides an interesting alternative approach for source nations eager for foreign archaeological investment. From 1935 to 2003, China had a restrictive cultural property law that prohibited foreign ownership of Chinese cultural artifacts. In those years, China’s most significant archaeological discovery occurred by chance, in 1974, when peasant farmers accidentally uncovered ranks of buried terra cotta warriors, which are part of Emperor Qin’s spectacular tomb system.

In 2003, the Chinese government switched course, dropping its cultural property law and embracing collaborative international archaeological research. Since then, China has nominated 11 archaeological sites for inclusion in the World Heritage Site list, including eight in 2013, the most ever for China.

Question 33

Which one of the following statements best expresses the paradox of patrimony laws?


Question 34

It can be inferred from the passage that archaeological sites are considered important by some source countries because they:


Question 35

Which one of the following statements, if true, would undermine the central idea of the passage?


Question 36

From the passage we can infer that the author is likely to advise poor, but archaeologically-rich source countries to do all of the following, EXCEPT:


Instruction for set :

The passage below is accompanied by a set of questions. Choose the best answer to each question.

Comprehension :

Stories concerning the Undead have always been with us. From out of the primal darkness of Mankind’s earliest years, come whispers of eerie creatures, not quite alive (or alive in a way which we can understand), yet not quite dead either. These may have been ancient and primitive deities who dwelt deep in the surrounding forests and in remote places, or simply those deceased who refused to remain in their tombs and who wandered about the countryside, physically tormenting and frightening those who were still alive. Mostly they were ill-defined—strange sounds in the night beyond the comforting glow of the fire, or a shape, half-glimpsed in the twilight along the edge of an encampment. They were vague and indistinct, but they were always there with the power to terrify and disturb. They had the power to touch the minds of our early ancestors and to fill them with dread. Such fear formed the basis of the earliest tales although the source and exact nature of such terrors still remained very vague.

And as Mankind became more sophisticated, leaving the gloom of their caves and forming themselves into recognizable communities—towns, cities, whole cultures—so the Undead travelled with them, inhabiting their folklore just as they had in former times. Now they began to take on more definite shapes. They became walking cadavers; the physical embodiment of former deities and things which had existed alongside Man since the Creation. Some still remained vague and ill-defined but, as Mankind strove to explain the horror which it felt towards them, such creatures emerged more readily into the light.

In order to confirm their abnormal status, many of the Undead were often accorded attributes, which defied the natural order of things—the power to transform themselves into other shapes, the ability to sustain themselves by drinking human blood, and the ability to influence human minds across a distance. Such powers—described as supernatural—only [lent] an added dimension to the terror that humans felt regarding them.

And it was only natural, too, that the Undead should become connected with the practice of magic. From very early times, Shamans and witchdoctors had claimed at least some power and control over the spirits of departed ancestors, and this has continued down into more “civilized” times. Formerly, the invisible spirits and forces that thronged around men’s earliest encampments, had spoken “through” the tribal Shamans but now, as entities in their own right, they were subject to magical control and could be physically summoned by a competent sorcerer. However, the relationship between the magician and an Undead creature was often a very tenuous and uncertain one. Some sorcerers might have even become Undead entities once they died, but they might also have been susceptible to the powers of other magicians when they did.

From the Middle Ages and into the Age of Enlightenment, theories of the Undead continued to grow and develop. Their names became more familiar—werewolf, vampire, ghoul—each one certain to strike fear into the hearts of ordinary humans.

Question 37

“In order to confirm their abnormal status, many of the Undead were often accorded attributes, which defied the natural order of things . . .”

Which one of the following best expresses the claim made in this statement?


Question 38

Which one of the following observations is a valid conclusion to draw from the statement, “From out of the primal darkness of Mankind’s earliest years, come whispers of eerie creatures, not quite alive (or alive in a way which we can understand), yet not quite dead either.”?


Question 39

Which one of the following statements best describes what the passage is about?


Question 40

All of the following statements, if false, could be seen as being in accordance with the passage, EXCEPT:


Instruction for set :

The passage below is accompanied by a set of questions. Choose the best answer to each question.

[Octopuses are] misfits in their own extended families . . . They belong to the Mollusca class Cephalopoda. But they don’t look like their cousins at all. Other molluscs include sea snails, sea slugs, bivalves - most are shelled invertebrates with a dorsal foot. Cephalopods are all arms, and can be as tiny as 1 centimetre and as large as 30 feet. Some of them have brains the size of a walnut, which is large for an invertebrate. . . .

It makes sense for these molluscs to have added protection in the form of a higher cognition; they don’t have a shell covering them, and pretty much everything feeds on cephalopods, including humans. But how did cephalopods manage to secure their own invisibility cloak? Cephalopods fire from multiple cylinders to achieve this in varying degrees from species to species. There are four main catalysts - chromatophores, iridophores, papillae and leucophores. . . .

[Chromatophores] are organs on their bodies that contain pigment sacs, which have red, yellow and brown pigment granules. These sacs have a network of radial muscles, meaning muscles arranged in a circle radiating outwards. These are connected to the brain by a nerve. When the cephalopod wants to change colour, the brain carries an electrical impulse through the nerve to the muscles that expand outwards, pulling open the sacs to display the colours on the skin. Why these three colours? Because these are the colours the light reflects at the depths they live in (the rest is absorbed before it reaches those depths). . . .

Well, what about other colours? Cue the iridophores. Think of a second level of skin that has thin stacks of cells. These can reflect light back at different wavelengths. . . . It’s using the same properties that we’ve seen in hologram stickers, or rainbows on puddles of oil. You move your head and you see a different colour. The sticker isn’t doing anything but reflecting light - it’s your movement that’s changing the appearance of the colour. This property of holograms, oil and other such surfaces is called “iridescence”. . . .

Papillae are sections of the skin that can be deformed to make a texture bumpy. Even humans possess them (goosebumps) but cannot use them in the manner that cephalopods can. For instance, the use of these cells is how an octopus can wrap itself over a rock and appear jagged or how a squid or cuttlefish can imitate the look of a coral reef by growing miniature towers on its skin. It actually matches the texture of the substrate it chooses.

Finally, the leucophores: According to a paper, published in Nature, cuttlefish and octopuses possess an additional type of reflector cell called a leucophore. They are cells that scatter full spectrum light so that they appear white in a similar way that a polar bear’s fur appears white. Leucophores will also reflect any filtered light shone on them . . . If the water appears blue at a certain depth, the octopuses and cuttlefish can appear blue; if the water appears green, they appear green, and so on and so forth.

Question 41

Based on the passage, it can be inferred that camouflaging techniques in an octopus are most dissimilar to those in:


Question 42

All of the following are reasons for octopuses being “misfits” EXCEPT that they:


Question 43

Which one of the following statements is not true about the camouflaging ability of Cephalopods?


Question 44

Based on the passage, we can infer that all of the following statements, if true, would weaken the camouflaging adeptness of Cephalopods EXCEPT:


Instruction for set :

The passage below is accompanied by a set of questions. Choose the best answer to each question.

When we teach engineering problems now, we ask students to come to a single “best” solution defined by technical ideals like low cost, speed to build, and ability to scale. This way of teaching primes students to believe that their decision-making is purely objective, as it is grounded in math and science. This is known as technical-social dualism, the idea that the technical and social dimensions of engineering problems are readily separable and remain distinct throughout the problem-definition and solution process.

Nontechnical parameters such as access to a technology, cultural relevancy or potential harms are deemed political and invalid in this way of learning. But those technical ideals are at their core social and political choices determined by a dominant culture focused on economic growth for the most privileged segments of society. By choosing to downplay public welfare as a critical parameter for engineering design, we risk creating a culture of disengagement from societal concerns amongst engineers that is antithetical to the ethical code of engineering.

In my field of medical devices, ignoring social dimensions has real consequences. . . . Most FDA-approved drugs are incorrectly dosed for people assigned female at birth, leading to unexpected adverse reactions. This is because they have been inadequately represented in clinical trials.

Beyond physical failings, subjective beliefs treated as facts by those in decision-making roles can encode social inequities. For example, spirometers, routinely used devices that measure lung capacity, still have correction factors that automatically assume smaller lung capacity in Black and Asian individuals. These racially based adjustments are derived from research done by eugenicists who thought these racial differences were biologically determined and who considered nonwhite people as inferior. These machines ignore the influence of social and environmental factors on lung capacity.

Many technologies for systemically marginalized people have not been built because they were not deemed important such as better early diagnostics and treatment for diseases like endometriosis, a disease that afflicts 10 percent of people with uteruses. And we hardly question whether devices are built sustainably, which has led to a crisis of medical waste and health care accounting for 10 percent of U.S. greenhouse gas emissions.

Social justice must be made core to the way engineers are trained. Some universities are working on this. . . . Engineers taught this way will be prepared to think critically about what problems we choose to solve, how we do so responsibly and how we build teams that challenge our ways of thinking.

Individual engineering professors are also working to embed societal needs in their pedagogy. Darshan Karwat at the University of Arizona developed activist engineering to challenge engineers to acknowledge their full moral and social responsibility through practical self-reflection. Khalid Kadir at the University of California, Berkeley, created the popular course Engineering, Environment, and Society that teaches engineers how to engage in place-based knowledge, an understanding of the people, context and history, to design better technical approaches in collaboration with communities. When we design and build with equity and justice in mind, we craft better solutions that respond to the complexities of entrenched systemic problems.

Question 45

We can infer that the author would approve of a more evolved engineering pedagogy that includes all of the following EXCEPT:


Question 46

All of the following are examples of the negative outcomes of focusing on technical ideals in the medical sphere EXCEPT the:


Question 47

In this passage, the author is making the claim that:


Question 48

The author gives all of the following reasons for why marginalised people are systematically discriminated against in technology-related interventions EXCEPT:


Instruction for set :

The passage below is accompanied by a set of questions. Choose the best answer to each question.

We begin with the emergence of the philosophy of the social sciences as an arena of thought and as a set of social institutions. The two characterisations overlap but are not congruent. Academic disciplines are social institutions. . . . My view is that institutions are all those social entities that organise action: they link acting individuals into social structures. There are various kinds of institutions. Hegelians and Marxists emphasise universal institutions such as the family, rituals, governance, economy and the military. These are mostly institutions that just grew. Perhaps in some imaginary beginning of time they spontaneously appeared. In their present incarnations, however, they are very much the product of conscious attempts to mould and plan them. We have family law, established and disestablished churches, constitutions and laws, including those governing the economy and the military. Institutions deriving from statute, like joint-stock companies are formal by contrast with informal ones such as friendships. There are some institutions that come in both informal and formal variants, as well as in mixed ones. Consider the fact that the stock exchange and the black market are both market institutions, one formal one not. Consider further that there are many features of the work of the stock exchange that rely on informal, noncodifiable agreements, not least the language used for communication. To be precise, mixtures are the norm . . . From constitutions at the top to by-laws near the bottom we are always adding to, or tinkering with, earlier institutions, the grown and the designed are intertwined.

It is usual in social thought to treat culture and tradition as different from, although alongside, institutions. The view taken here is different. Culture and tradition are sub-sets of institutions analytically isolated for explanatory or expository purposes. Some social scientists have taken all institutions, even purely local ones, to be entities that satisfy basic human needs - under local conditions . . . Others differed and declared any structure of reciprocal roles and norms an institution. Most of these differences are differences of emphasis rather than disagreements. Let us straddle all these versions and present institutions very generally . . . as structures that serve to coordinate the actions of individuals. . . . Institutions themselves then have no aims or purpose other than those given to them by actors or used by actors to explain them . . .

Language is the formative institution for social life and for science . . . Both formal and informal language is involved, naturally grown or designed. (Language is all of these to varying degrees.) Languages are paradigms of institutions or, from another perspective, nested sets of institutions. Syntax, semantics, lexicon and alphabet/character-set are all institutions within the larger institutional framework of a written language. Natural languages are typical examples of what Ferguson called ‘the result of human action, but not the execution of any human design’[;] reformed natural languages and artificial languages introduce design into their modifications or refinements of natural language. Above all, languages are paradigms of institutional tools that function to coordinate.

Question 49

“Consider the fact that the stock exchange and the black market are both market institutions, one formal one not.” Which one of the following statements best explains this quote, in the context of the passage?


Question 50

All of the following inferences from the passage are false, EXCEPT:


Question 51

In the first paragraph of the passage, what are the two “characterisations” that are seen as overlapping but not congruent?


Question 52

Which of the following statements best represents the essence of the passage?


Instruction for set :

The passage below is accompanied by a set of questions. Choose the best answer to each question.

Humans today make music. Think beyond all the qualifications that might trail after this bald statement: that only certain humans make music, that extensive training is involved, that many societies distinguish musical specialists from nonmusicians, that in today’s societies most listen to music rather than making it, and so forth. These qualifications, whatever their local merit, are moot in the face of the overarching truth that making music, considered from a cognitive and psychological vantage, is the province of all those who perceive and experience what is made. We are, almost all of us, musicians — everyone who can entrain (not necessarily dance) to a beat, who can recognize a repeated tune (not necessarily sing it), who can distinguish one instrument or one singing voice from another. I will often use an antique word, recently revived, to name this broader musical experience. Humans are musicking creatures. . . .

The set of capacities that enables musicking is a principal marker of modern humanity. There is nothing polemical in this assertion except a certain insistence, which will figure often in what follows, that musicking be included in our thinking about fundamental human commonalities. Capacities involved in musicking are many and take shape in complicated ways, arising from innate dispositions . . . Most of these capacities overlap with nonmusical ones, though a few may be distinct and dedicated to musical perception and production. In the area of overlap, linguistic capacities seem to be particularly important, and humans are (in principle) language-makers in addition to music-makers — speaking creatures as well as musicking ones.

Humans are symbol-makers too, a feature tightly bound up with language, not so tightly with music. The species Cassirer dubbed Homo symbolicus cannot help but tangle musicking in webs of symbolic thought and expression, habitually making it a component of behavioral complexes that form such expression. But in fundamental features musicking is neither language-like nor symbol-like, and from these differences come many clues to its ancient emergence.

If musicking is a primary, shared trait of modern humans, then to describe its emergence must be to detail the coalescing of that modernity. This took place, archaeologists are clear, over a very long durée: at least 50,000 years or so, more likely something closer to 200,000, depending in part on what that coalescence is taken to comprise. If we look back 20,000 years, a small portion of this long period, we reach the lives of humans whose musical capacities were probably little different from our own. As we look farther back we reach horizons where this similarity can no longer hold — perhaps 40,000 years ago, perhaps 70,000, perhaps 100,000. But we never cross a line before which all the cognitive capacities recruited in modern musicking abruptly disappear. Unless we embrace the incredible notion that music sprang forth in full-blown glory, its emergence will have to be tracked in gradualist terms across a long period.

This is one general feature of a history of music’s emergence . . . The history was at once sociocultural and biological . . . The capacities recruited in musicking are many, so describing its emergence involves following several or many separate strands.

Question 53

Which one of the following sets of terms best serves as keywords to the passage?


Question 54

“Think beyond all the qualifications that might trail after this bald statement . . .” In the context of the passage, what is the author trying to communicate in this quoted extract?


Question 55

Based on the passage, which one of the following statements is a valid argument about the emergence of music/musicking?


Question 56

Which one of the following statements, if true, would weaken the author’s claim that humans are musicking creatures?


Instruction for set :

A set of questions accompanies the passage below. Choose the best answer to each question.

Interpretations of the Indian past . . . were inevitably influenced by colonial concerns and interests, and also by prevalent European ideas about history, civilization and the Orient. Orientalist scholars studied the languages and the texts with selected Indian scholars, but made little attempt to understand the worldview of those who were teaching them. The readings, therefore, are something of a disjuncture from the traditional ways of looking at the Indian past. . . .

Orientalism [which we can understand broadly as Western perceptions of the Orient] fuelled the fantasy and the freedom sought by European Romanticism, particularly in its opposition to the more disciplined Neo-Classicism. The cultures of Asia were seen as bringing a new Romantic paradigm. Another Renaissance was anticipated through an acquaintance with the Orient, and this, it was thought, would be different from the earlier Greek Renaissance. It was believed that this Oriental Renaissance would liberate European thought and literature from the increasing focus on discipline and rationality that had followed from the earlier Enlightenment. . . . [The Romantic English poets, Wordsworth and Coleridge,] were apprehensive of the changes introduced by industrialization and turned to nature and to fantasies of the Orient.

However, this enthusiasm gradually changed, to conform with the emphasis later in the nineteenth century on the innate superiority of European civilization. Oriental civilizations were now seen as having once been great but currently in decline. The various phases of Orientalism tended to mould European understanding of the Indian past into a particular pattern. . . . There was an attempt to formulate Indian culture as uniform, such formulations being derived from texts that were given priority. The so-called ‘discovery’ of India was largely through selected literature in Sanskrit. This interpretation tended to emphasize non-historical aspects of Indian culture, for example, the idea of an unchanging continuity of society and religion over 3,000 years; and it was believed that the Indian pattern of life was so concerned with metaphysics and the subtleties of religious belief that little attention was given to the more tangible aspects.

German Romanticism endorsed this image of India, and it became the mystic land for many Europeans, where even the most ordinary actions were imbued with a complex symbolism. This was the genesis of the idea of the spiritual east, and also, incidentally, the refuge of European intellectuals seeking to distance themselves from the changing patterns of their own societies. A dichotomy in values was maintained, Indian values being described as ‘spiritual’ and European values as ‘materialistic’, with little attempt to juxtapose these values with the reality of Indian society. This theme has been even more firmly endorsed by a section of Indian opinion during the last hundred years.

It was a consolation to the Indian intelligentsia for its perceived inability to counter the technical superiority of the west, a superiority viewed as having enabled Europe to colonize Asia and other parts of the world. At the height of anti-colonial nationalism it acted as a salve for having been made a colony of Britain.

Question 57

It can be inferred from the passage that to gain a more accurate view of a nation’s history and culture, scholars should do all of the following EXCEPT:


Question 58

It can be inferred from the passage that the author is not likely to support the view that:


Question 59

In the context of the passage, all of the following statements are true EXCEPT:


Question 60

Which one of the following styles of research is most similar to the Orientalist scholars’ method of understanding Indian history and culture?


Instruction for set :

The passage below is accompanied by a set of questions. Choose the best answer to each question.

Sociologists working in the Chicago School tradition have focused on how rapid or dramatic social change causes increases in crime. Just as Durkheim, Marx, Toennies, and other European sociologists thought that the rapid changes produced by industrialization and urbanization produced crime and disorder, so too did the Chicago School theorists. The location of the University of Chicago provided an excellent opportunity for Park, Burgess, and McKenzie to study the social ecology of the city. Shaw and McKay found . . . that areas of the city characterized by high levels of social disorganization had higher rates of crime and delinquency.

In the 1920s and 1930s Chicago, like many American cities, experienced considerable immigration. Rapid population growth is a disorganizing influence, but growth resulting from in-migration of very different people is particularly disruptive. Chicago’s in-migrants were both native-born whites and blacks from rural areas and small towns, and foreign immigrants. The heavy industry of cities like Chicago, Detroit, and Pittsburgh drew those seeking opportunities and new lives. Farmers and villagers from America’s hinterland, like their European cousins of whom Durkheim wrote, moved in large numbers into cities. At the start of the twentieth century, Americans were predominately a rural population, but by the century’s mid-point, most lived in urban areas. The social lives of these migrants, as well as those already living in the cities they moved to, were disrupted by the differences between urban and rural life. According to social disorganization theory, until the social ecology of the ‘‘new place’’ can adapt, this rapid change is a criminogenic influence. But most rural migrants, and even many of the foreign immigrants to the city, looked like and eventually spoke the same language as the natives of the cities into which they moved. These similarities allowed for more rapid social integration for these migrants than was the case for African Americans and most foreign immigrants.

In these same decades, America experienced what has been called ‘‘the great migration’’: the massive movement of African Americans out of the rural South and into northern (and some southern) cities. The scale of this migration is one of the most dramatic in human history. These migrants, unlike their white counterparts, were not integrated into the cities they now called home. In fact, most American cities at the end of the twentieth century were characterized by high levels of racial residential segregation . . . Failure to integrate these immigrants, coupled with other forces of social disorganization such as crowding, poverty, and illness, caused crime rates to climb in the cities, particularly in the segregated wards and neighbourhoods where the migrants were forced to live.

Foreign immigrants during this period did not look as dramatically different from the rest of the population as blacks did, but the migrants from eastern and southern Europe who came to American cities did not speak English, and were frequently Catholic, while the native born were mostly Protestant. The combination of rapid population growth with the diversity of those moving into the cities created what the Chicago School sociologists called social disorganization.

Question 61

Which one of the following sets of words/phrases best encapsulates the issues discussed in the passage?


Question 62

A fundamental conclusion by the author is that:


Question 63

Which one of the following is not a valid inference from the passage?


Question 64

The author notes that, “At the start of the twentieth century, Americans were predominately a rural population, but by the century’s mid-point most lived in urban areas.” Which one of the following statements, if true, does not contradict this statement?


Instruction for set :

The passage below is accompanied by a set of questions. Choose the best answer to each question.

Nature has all along yielded her flesh to humans. First, we took nature’s materials as food, fibers, and shelter. Then we learned to extract raw materials from her biosphere to create our own new synthetic materials. Now Bios is yielding us her mind—we are taking her logic.

Clockwork logic—the logic of the machines—will only build simple contraptions. Truly complex systems such as a cell, a meadow, an economy, or a brain (natural or artificial) require a rigorous nontechnological logic. We now see that no logic except bio-logic can assemble a thinking device, or even a workable system of any magnitude.

It is an astounding discovery that one can extract the logic of Bios out of biology and have something useful. Although many philosophers in the past have suspected one could abstract the laws of life and apply them elsewhere, it wasn’t until the complexity of computers and human-made systems became as complicated as living things, that it was possible to prove this. It’s eerie how much of life can be transferred. So far, some of the traits of the living that have successfully been transported to mechanical systems are: self-replication, self-governance, limited self-repair, mild evolution, and partial learning.

We have reason to believe yet more can be synthesized and made into something new. Yet at the same time that the logic of Bios is being imported into machines, the logic of Technos is being imported into life. The root of bioengineering is the desire to control the organic long enough to improve it. Domesticated plants and animals are examples of technos-logic applied to life. The wild aromatic root of the Queen Anne’s lace weed has been fine-tuned over generations by selective herb gatherers until it has evolved into a sweet carrot of the garden; the udders of wild bovines have been selectively enlarged in an “unnatural” way to satisfy humans rather than calves. Milk cows and carrots, therefore, are human inventions as much as steam engines and gunpowder are. But milk cows and carrots are more indicative of the kind of inventions humans will make in the future: products that are grown rather than manufactured.

Genetic engineering is precisely what cattle breeders do when they select better strains of Holsteins, only bioengineers employ more precise and powerful control. While carrot and milk cow breeders had to rely on diffuse organic evolution, modern genetic engineers can use directed artificial evolution—purposeful design—which greatly accelerates improvements.

The overlap of the mechanical and the lifelike increases year by year. Part of this bionic convergence is a matter of words. The meanings of “mechanical” and “life” are both stretching until all complicated things can be perceived as machines, and all self-sustaining machines can be perceived as alive. Yet beyond semantics, two concrete trends are happening: (1) Human-made things are behaving more lifelike, and (2) Life is becoming more engineered. The apparent veil between the organic and the manufactured has crumpled to reveal that the two really are, and have always been, of one being.

Question 65

Which one of the following sets of words/phrases best serves as keywords to the passage?


Question 66

The author claims that, “Part of this bionic convergence is a matter of words”. Which one of the following statements best expresses the point being made by the author?


Question 67

The author claims that, “The apparent veil between the organic and the manufactured has crumpled to reveal that the two really are, and have always been, of one being.” Which one of the following statements best expresses the point being made by the author here?


Question 68

None of the following statements is implied by the arguments of the passage, EXCEPT:


Instruction for set :

The passage below is accompanied by a set of questions. Choose the best answer to each question.

As software improves, the people using it become less likely to sharpen their own know-how. Applications that offer lots of prompts and tips are often to blame; simpler, less solicitous programs push people harder to think, act and learn.

Ten years ago, information scientists at Utrecht University in the Netherlands had a group of people carry out complicated analytical and planning tasks using either rudimentary software that provided no assistance or sophisticated software that offered a great deal of aid. The researchers found that the people using the simple software developed better strategies, made fewer mistakes and developed a deeper aptitude for the work. The people using the more advanced software, meanwhile, would often “aimlessly click around” when confronted with a tricky problem. The supposedly helpful software actually short-circuited their thinking and learning.

[According to] philosopher Hubert Dreyfus . . . . our skills get sharper only through practice, when we use them regularly to overcome different sorts of difficult challenges. The goal of modern software, by contrast, is to ease our way through such challenges. Arduous, painstaking work is exactly what programmers are most eager to automate—after all, that is where the immediate efficiency gains tend to lie. In other words, a fundamental tension ripples between the interests of the people doing the automation and the interests of the people doing the work.

Nevertheless, automation’s scope continues to widen. With the rise of electronic health records, physicians increasingly rely on software templates to guide them through patient exams. The programs incorporate valuable checklists and alerts, but they also make medicine more routinized and formulaic—and distance doctors from their patients. . . . Harvard Medical School professor Beth Lown, in a 2012 journal article . . . warned that when doctors become “screen-driven,” following a computer’s prompts rather than “the patient’s narrative thread,” their thinking can become constricted. In the worst cases, they may miss important diagnostic signals. . . .

In a recent paper published in the journal Diagnosis, three medical researchers . . . examined the misdiagnosis of Thomas Eric Duncan, the first person to die of Ebola in the U.S., at Texas Health Presbyterian Hospital Dallas. They argue that the digital templates used by the hospital’s clinicians to record patient information probably helped to induce a kind of tunnel vision. “These highly constrained tools,” the researchers write, “are optimized for data capture but at the expense of sacrificing their utility for appropriate triage and diagnosis, leading users to miss the forest for the trees.” Medical software, they write, is no “replacement for basic history-taking, examination skills, and critical thinking.” . . .

There is an alternative. In “human-centred automation,” the talents of people take precedence. . . . In this model, software plays an essential but secondary role. It takes over routine functions that a human operator has already mastered, issues alerts when unexpected situations arise, provides fresh information that expands the operator’s perspective and counters the biases that often distort human thinking. The technology becomes the expert’s partner, not the expert’s replacement.

Question 69

In the Ebola misdiagnosis case, we can infer that doctors probably missed the forest for the trees because:


Question 70

In the context of the passage, all of the following can be considered examples of human-centered automation EXCEPT:


Question 71

From the passage, we can infer that the author is apprehensive about the use of sophisticated automation for all of the following reasons EXCEPT that:


Question 72

It can be inferred that in the Utrecht University experiment, one group of people was “aimlessly clicking around” because:


Instruction for set :

Read the passage carefully and answer the following questions

If you see police choking someone to death, you might choose to pepper-spray them and flee. You might even save an innocent life. But what ethical considerations justify such dangerous heroics? More important: do we have the right to defend ourselves and others from government injustice when government agents are following an unjust law? I think the answer is yes. But that view needs defending. Under what circumstances might active self-defence, including possible violence, be justified?

Civil disobedience is a public act that aims to create social or legal change. Think of Henry David Thoreau’s arrest in 1846 for refusing to pay taxes to fund the colonial exploits of the United States. In such a case, disobedient citizens visibly break the law and accept punishment, so as to draw attention to a cause. But justifiable resistance need not have a civic character. It need not aim at changing the law, reforming dysfunctional institutions or replacing bad leaders. Sometimes, it is simply about stopping an immediate injustice.

Some people say we may not defend ourselves against government injustice because governments and their agents have ‘authority’. But the authority argument doesn’t work. It’s one thing to say that you have a duty to pay your taxes or follow the speed limit. It is quite another to show that you are specifically bound to allow a government and its agents to use excessive violence and ignore your rights to due process.

Others say that we should resist government injustice, but only through peaceful methods. Indeed, we should, but that doesn’t differentiate between self-defence against civilians or government. The common-law doctrine of self-defence is always governed by a necessity proviso: you may lie or use violence only if necessary, that is, only if peaceful actions are not as effective. But peaceful methods often fail to stop wrongdoing. Eric Garner peacefully complained: ‘I can’t breathe,’ until he drew his last breath.

Another argument is that we shouldn’t act as vigilantes. But invoking this point here misunderstands the antivigilante principle, which says that when there exists a workable public system of justice, you should defer to public agents trying, in good faith, to administer justice. So if cops attempt to stop a mugging, you shouldn’t insert yourself. But if they ignore or can’t stop a mugging, you may intervene. If the police themselves are the muggers, the antivigilante principle does not forbid you from defending yourself. It insists you defer to more competent government agents when they administer justice, not that you must let them commit injustice.

Some people find my thesis too dangerous. They claim that it’s hard to know exactly when self-defence is justified; that people make mistakes, resisting when they should not. Perhaps. But that’s true of self-defence against civilians, too. No one says we lack a right of self-defence against each other because applying the principle is hard. Rather, some moral principles are hard to apply.

However, this objection gets the problem exactly backwards. In real life, people are too deferential and conformist in the face of government authority and reluctant to stand up to political injustice. If anything, the dangerous thesis is that we should defer to government agents when they seem to act unjustly. Remember, self-defence against the state is about stopping an immediate injustice, not fixing broken rules.

Jason Brennan & Marina Benjamin
This article was originally published at Aeon and has been republished under Creative Commons.

Question 73

What is the main point of the last two paragraphs?

Question 74

Which of the following responses would the author not agree with?

Question 75

What point does the author try to make through the given passage?

Question 76

All of the following statements are not true according to the passage except

Question 77

Which of the following statements can be inferred from the passage as true?


Instruction for set :

Read the passage carefully and answer the following question:

Certain forms of personal righteousness have become to a majority of the humans almost automatic. It is as easy for most of us to keep from stealing our dinners as it is to digest them, and there is quite as much voluntary morality involved in one process as in the other. To steal would be for us to fall sadly below the standard of habit and expectation which makes virtue easy. In the same way we have been carefully reared to a sense of family obligation, to be kindly and considerate to the members of our own households, and to feel responsible for their well-being. As the rules of conduct have become established in regard to our self-development and our families, so they have been in regard to limited circles of friends. If the fulfillment of these claims were all that a righteous life required, the hunger and thirst would be stilled for many good men and women, and the clew of right living would lie easily in their hands.

But we all know that each generation has its own test, the contemporaneous and current standard by which alone it can adequately judge of its own moral achievements, and that it may not legitimately use a previous and less vigorous test. The advanced test must indeed include that which has already been attained; but if it includes no more, we shall fail to go forward, thinking complacently that we have "arrived" when in reality we have not yet started.

To attain individual morality in an age demanding social morality, to pride one's self on the results of personal effort when the time demands social adjustment, is utterly to fail to apprehend the situation. It is perhaps significant that a German critic has of late reminded us that the one test which the most authoritative and dramatic portrayal of the Day of Judgment offers, is the social test. The stern questions are not in regard to personal and family relations, but did ye visit the poor, the criminal, the sick, and did ye feed the hungry?

All about us are men and women who have become unhappy in regard to their attitude toward the social order itself; toward the dreary round of uninteresting work, the pleasures narrowed down to those of appetite, the declining consciousness of brain power, and the lack of mental food which characterizes the lot of the large proportion of their fellow-citizens. These men and women have caught a moral challenge raised by the exigencies of contemporaneous life; some are bewildered, others who are denied the relief which sturdy action brings are even seeking an escape, but all are increasingly anxious concerning their actual relations to the basic organization of society.

The test which they would apply to their conduct is a social test. They fail to be content with the fulfillment of their family and personal obligations, and find themselves striving to respond to a new demand involving a social obligation; they have become conscious of another requirement, and the contribution they would make is toward a code of social ethics.

Question 78

According to the passage, which is the least suitable statement about "righteousness" mentioned in the passage?

Question 79

The main purpose of the passage is to

Question 80

Which of the following statements can be inferred from the passage?

Question 81

Which of the following is not a consequence of the consciousness towards the demand of a social obligation?

Question 82

Which of the following statements is not in agreement with the features of the social test mentioned in the passage?

Question 83

The four sentences (labelled 1, 2, 3, 4) below, when properly sequenced, would yield a coherent paragraph. Decide on the proper sequencing of the sentences and key in the sequence of the four numbers as your answer:

1. There has been a persistent and pathological pattern of serious mistakes from which nothing is learned, and which are soon repeated.

2. But the UK has not simply made a few errors here and there.

3. As a result, the country is waking up to yet another false dawn.

4. Epidemiologists will be the first people to admit that managing a pandemic is extremely difficult.

Question 84

Four sentences are given below. These sentences, when rearranged in the proper order, form a logical and meaningful paragraph. Rearrange the sentences and enter the correct order as the answer.

1. Economic and social conditions made possible the introduction and development of the Georgian style in America and the same conditions nurtured and kept it alive so long as its influence continued to dominate the public taste.
2. And it was but natural that, with favourable domestic conditions, they should seek to emulate the luxury and more polished manner of life obtaining in the mother country, and the adoption of contemporary British architectural modes was one way in which that filial emulation found expression.
3. An era of general peace and growing prosperity in the early years of the eighteenth century permitted and encouraged the colonists to pay more heed to the material amenities of life than had previously been their wont.
4. When its latest phase passed over into the forms of the Classic Revival, a new order of society, actuated by different ideals, had arisen.

Question 85

The four sentences (labelled 1, 2, 3, and 4) given in this question, when properly sequenced, form a coherent paragraph. Decide on the proper order for the sentences and key in this sequence of four numbers as your answer.

1. Many of the Romans, upon their conquest of Gallia, were surprised at the degree and character of the philosophical knowledge possessed by the Druids, and many of them have left written records of the same, notably in the case of Aristotle, Cæsar, Lucan, and Valerius Maximus.

2. These people, generally regarded as ancient barbarians, really possessed a philosophy of a high order, which merged into a mystic form of religion.

3. As strange as it may appear to many readers unfamiliar with the subject, the ancient Druids, particularly those dwelling in ancient Gaul, were familiar with the doctrine of Reincarnation, and believed in its tenets.

4. The Christian teachers who succeeded them also bore witness to these facts, as may be seen by reference to the works of St. Clement, St. Cyril, and other of the early Christian Fathers.

Question 86

The four sentences (labelled 1, 2, 3, 4) below, when properly sequenced, would yield a coherent paragraph. Decide on the proper sequencing of the sentences and key in the sequence of the four numbers as your answer:

1. On the contrary, the industry is happy reducing the wage bills, doing mechanisation and raising its profits.

2. During the pandemic, nearly 31 million families have moved down from the middle class and nearly 100 million people have lost jobs.

3. The industries that are most likely to create employment, i.e. the medium and small industries, are going down under and the large ones which do not create employment are the poster boys.

4. They are the ones that will get the 6 per cent productivity-linked incentive from the tax paid by the average taxpayers, with unknown consequences.

Question 87

Four sentences are given below. These sentences, when rearranged in the proper order, form a logical and meaningful paragraph. Rearrange the sentences and enter the correct order as the answer.

1. From the exterior Ludlow Street Jail looks somewhat like a conservatory of music, but as soon as one enters he readily discovers his mistake.
2. The structure has 100 feet frontage, and a court, which is sometimes called the court of last resort.
3. That one thing is doing a great deal towards keeping quite a number of people here who would otherwise, I think, go away.
4. The guest can climb out of this court by ascending a polished brick wall about 100 feet high, and then letting himself down in a similar way on the Ludlow street side.

Question 88

Read the following paragraph and select the option that best captures its essence:

According to our common rule of civility, it would be a notable affront to an equal, and much more to a superior, to fail being at home when he has given you notice he will come to visit you. Nay, Queen Margaret of Navarre further adds, that it would be a rudeness in a gentleman to go out, as we so often do, to meet any that is coming to see him, let him be of what high condition soever; and that it is more respectful and more civil to stay at home to receive him, if only upon the account of missing him by the way, and that it is enough to receive him at the door, and to wait upon him.

Question 89

Read the following paragraph and select the option that best captures its essence:

In America, the intensity and power of men like Emerson and Whittier gave way to the pale romanticism and polite banter of the transition, or, what might even more fittingly be called the “post-mortem” poets. For these interim lyrists were frankly the singers of reaction, reminiscently digging among the bones of a long-dead past. They burrowed and borrowed, half archaeologists, half artisans; impelled not so much by the need of creating poetry as the desire to write it.

Question 90

The passage given below is followed by four summaries. Choose the option that best captures the author’s position.

According to the Sikh worldview, the whole is prior to its parts. The level of reality at which we are all individuals is a less fundamental reality than the level at which we are all One. This is a different worldview from that of most philosophers in the Western canon, who have usually posited the individual as fundamental. Western philosophers tend to think of the parts (us) as prior to the whole (if any whole even exists). Correspondingly, the Sikh tradition ends up giving a different story about morality from most of Western philosophy, one that’s grounded in a belief in the fundamental unity of all things.

Question 91

Read the following paragraph and choose the option that best captures its essence:

The most prominent journalistic response to fake news and other forms of misleading or false information is fact-checking, which has attracted a growing audience in recent years. We found that one in four respondents (25.3%) read a fact-checking article from a dedicated national fact-checking website at least once during the study period. Recent evidence suggests that this new form of journalism can help inform voters (Flynn, Nyhan, and Reifler, 2017). However, fact-checking may not effectively reach people who have encountered the false claims it debunks. Only 72% of respondents report being familiar with fact-checking. Among those who are familiar with fact-checking, only 68% report having a “very” or “somewhat favourable” view of fact-checking. Positive views of fact-checking are less common among fake news consumers (48%), especially those who support Trump (24%).

Question 92

Read the following paragraph and select the option which best captures its essence.

Understanding has only one function—immediate knowledge of the relation of cause and effect. Yet the perception of the real world, and all common sense, sagacity, and inventiveness, however multifarious their applications may be, are quite clearly seen to be nothing more than manifestations of that one function. So also the reason has one function; and from it all the manifestations of reason which have been mentioned, and which distinguish the life of man from that of the brutes, may easily be explained.

Question 93

Five sentences are given below. Four of these, when appropriately rearranged, form a logical and meaningful paragraph. Identify the sentence which does not belong to the paragraph and enter its number as the answer.

1. In France, the Fourierist Considérant issued his remarkable manifesto, which contains, beautifully developed, all the theoretical considerations upon the growth of Capitalism, which are now described as "Scientific Socialism."

2. Socialism had to be a religion, and they had to regulate its march, as the heads of a new church.

3. The three great founders of Socialism who wrote at the dawn of the nineteenth century were so entranced by the wide horizons which it opened before them, that they looked upon it as a new revelation, and upon themselves as upon the founders of a new religion.

4. They put their faith, on the contrary, into some great ruler, some Socialist Napoleon.

5. Besides, writing during the period of reaction which had followed the French Revolution, and seeing more its failures than its successes, they did not trust the masses, and they did not appeal to them for bringing about the changes which they thought necessary.

Question 94

Five sentences are given below. Four of these, when arranged in the proper order, form a logical and meaningful paragraph. Identify the sentence that does not belong to the paragraph and enter its number as your answer.

1. Is man free in action and thought, or is he bound by an iron necessity?
2. The idea of freedom has found enthusiastic supporters and stubborn opponents in plenty.
3. The alleged freedom of indifferent choice has been recognized as an empty illusion by every philosophy worthy of the name.
4. There are those who, in their moral fervour, label anyone a man of limited intelligence who can deny so patent a fact as freedom and opposed to them are others who regard it as the acme of unscientific thinking for anyone to believe that the uniformity of natural law is broken in the sphere of human action and thought.
5. There are few questions on which so much ingenuity has been expended.

Question 95

Five sentences related to a topic are given below. Four of them can be put together to form a meaningful and coherent short paragraph. Identify the odd one out.

1. Researchers see signs of this in sperm whales in the Galápagos and the Caribbean, in humpbacks across the South Pacific, in Arctic belugas, and in the Pacific Northwest’s killer whales.

2. Today many scientists believe some whales and dolphins, like humans, have distinct cultures.

3. Whale culture, it seems, is rattling timeworn conceptions of ourselves.

4. The possibility is prompting new thinking about how some marine species evolve.

5. Cultural traditions may help drive genetic shifts, altering what it means to be a whale.

Question 96

Five sentences related to a topic are given below. Four of them can be put together to form a meaningful and coherent short paragraph. Identify the odd one out.

1. It’s not binary, it’s not completely quantifiable.

2. The early stages of mental disorders are much more subtle and varied, and there is less agreement between clinicians.

3. I make this judgment based on a history from someone who knows the patient well largely to rule out other causes.

4. I can diagnose diabetes based on a number on a blood test: it is binary and quantifiable.

5. When I diagnose dementia, it’s based on my subjective judgment that the person’s cognition has declined.

Question 97

Five sentences related to a topic are given below. Four of them can be put together to form a meaningful and coherent short paragraph. Identify the odd one out.

1. The Covid-19 pandemic has exposed the paradox that while we are more connected, we are also more divided.

2. The age of individualism is passing.

3. A politics that strengthens belonging can reverse the excesses of individualism.

4. Now Covid-19 has highlighted our mutual dependence on one another and a desire for community.

5. The past 50 years in the West saw a celebration of unfettered freedom.

Question 98

There is a sentence that is missing in the paragraph below. Look at the paragraph and decide in which blank (option 1, 2, 3, or 4) the following sentence would best fit.

Sentence: For instance, small genetic mutations or environmental changes in biological evolution can lead to significant evolutionary shifts over time.

Paragraph: …..1…. The chaotic nature of nonlinear systems impacts more than just mathematics …..2….. The path of evolution is not linear or predictable; instead, it is full of unexpected twists and turns, like the movement of a pebble down the mountain …..3..… Similarly, in economics, markets function as complex, nonlinear systems ….4….. Rumours about a company or slight changes in interest rates can act as triggers, setting off substantial and unanticipated shifts.

Question 99

There is a sentence missing in the paragraph below. Look at the paragraph and decide where (option 1, 2, 3, or 4) the following sentence would best fit.

Sentence: Far from diminishing over the years, the resentment bordering on outrage has continued apace.

Paragraph: In 1781, US founding father John Witherspoon coined the term “Americanism” and started complaining about the way words concocted by the ex-colonists were polluting the purity of the English language …..1….. Just weeks ago, in the Telegraph, Simon Heffer whinged that - as the headline of his piece put it - “Americanisms are poisoning our language” …..2….. So it can come as a shock to Britons to learn that their words and expressions have been worming their way into the American lexicon just as much, it would appear, as the other way around …..3….. I date the run-up (that’s an alternate meaning of run-up: “increase”) in Britishisms to the early 1990s, and it’s surely significant that this was when such journalists as Tina Brown, Anna Wintour, Andrew Sullivan and Christopher Hitchens moved to the US or consolidated their prominence there …..4…..

Question 100

There is a sentence that is missing in the paragraph below. Look at the paragraph and decide in which blank (option 1, 2, 3, or 4) the following sentence would best fit.

Sentence: Tobacco companies delayed regulation for decades.

Passage: …..1….. For the tobacco and fossil fuel companies, “doubt-mongering” and the exploitation of uncertainty have been very successful strategies indeed…..2….. Fossil fuel companies, in full knowledge of the facts of carbon’s atmospheric-warming effects, stymied action on climate change. The makers of asbestos, ozone, lead paint, pesticides, pharmaceuticals, et cetera, have all used such strategies to maintain profits, despite the number of people harmed by those industries …..3….. Some have called this the deployment of “willful ignorance.”  …..4….. Oreskes and others argue that such strategies are “conscious, deliberate, and organized.” For, as Oreskes stresses, uncertainty doesn’t need to be manufactured by for-hire amoralists at public relations firms.

Question 101

There is a sentence that is missing in the paragraph below. Look at the paragraph and decide in which blank (option 1, 2, 3, or 4) the following sentence would best fit.

Sentence: So far, there is no evidence of an AI-induced productivity surge in the economy at large.

Passage: Could generative AI prompt similarly profound changes?.....1…..A lesson of previous technological breakthroughs is that economywide, they take ages to pay off…..2…..The average worker at the average firm needs time to get used to new ways of working…..3…..The productivity gains from the personal computer did not come until at least a decade after it became widely available…..4….. According to a recent survey from BCG, a majority of executives said it will take at least two years to “move beyond the hype” around AI. Recent research by Oliver Wyman, another consultancy, concludes that the adoption of AI “has not necessarily translated into higher levels of productivity—yet”.

Question 102

There is a sentence that is missing in the paragraph below. Look at the paragraph and decide in which blank (option 1, 2, 3, or 4) the following sentence would best fit.

Sentence: Human cooperation requires firm answers rather than just questions, and those who foam against stultified religious structures end up forging new structures in their place.

Paragraph: …..1….. From a historical perspective, the spiritual journey is always tragic, for it is a lonely path fit for individuals rather than for entire societies …..2….. It happened to the dualists, whose spiritual journeys became religious establishments. It happened to Martin Luther, who after challenging the laws, institutions and rituals of the Catholic Church found himself writing new law books, founding new institutions and inventing new ceremonies …..3….. It happened even to Buddha and Jesus …..4….. In their uncompromising quest for the truth they subverted the laws, rituals and structures of traditional Hinduism and Judaism. But eventually more laws, more rituals and more structures were created in their name than in the name of any other person in history.

Frequently Asked Questions