Recently, reading disfluency has been used as an instructional strategy. Contrary to conventional belief, it assumes that fonts that are difficult to read may enhance the recall of text messages. Positive outcomes have been reported with a manipulation of disfluency with English fonts. This study focused on the effect of disfluency by varying Japanese kanji fonts for students who are learning Japanese as a second language. Various kanji fonts were presented interactively in an e-book in which learning texts were embedded. The results suggested that the disfluent font of the learning texts could produce significant improvements in reading retention…
“In Japanese, kanji characters are independent from spoken language and function non-phonetically with speakers exhibiting right-hemispheric advantage in their processing…
This study intended to examine the effects of disfluent kanji fonts on reading retention for international learners studying Japanese as a second language. An experiment was conducted with an e-book possessing adaptive functions able to select suitable kanji fonts for each individual learner…
According to the learners’ ratings, the most disfluent font was Gyoshotai, with an average ranking of 6.95; 95% of participants rated it the hardest-to-read font. Conversely, the average ranking of Gothic was 1.05, and 95% of participants rated it the easiest to read. In addition, an average ranking of 3.00 was obtained for the most popular font, Kyokashotai, which is recommended by the Japanese Ministry of Education as the standard kanji style used in textbooks…
The more difficult the font was to read, the higher the score on the test…
The results of the study suggested that adaptively increasing the disfluency level of fonts in learning materials could produce significant improvements in learning outcomes. However, further investigation is necessary to examine disfluency effects for native Japanese speakers. Meanwhile, in some situations perceptual disfluency does not affect memory, because it is not a desirable difficulty and because people’s judgments of learning (JOLs) may not reflect the consequences of processing disfluent information.”
If you’re wondering what all those posts were, these are the early drafts (possibly final) of my last big project here for a while.
I’ve been circling these ideas for a while, trying to figure out a framework for continuous, effective learning despite the inefficiency of traditional education and the difficulties of getting materials into Anki.
Are you afraid of Anki? It’s time to dissolve it into the Void.
As we’ve seen, we don’t need to be tethered to Anki. Effective learning with spacing and retrieval does not require fancy algorithms with multiple buttons, or simple Q/A formats as opposed to practice reminders or free recall cues, etc. Yet it’s an awesome, flexible and powerful tool to semi-automate your learning, so we should definitely use it.
But we must subordinate Anki to the guiding principles that we’ve learned.
That overwhelming feeling of having ‘cards’ due–ignore it; that’s merely an optimization. If you have no pressing need, it doesn’t matter when you go back: as long as you eventually get to it, you’ll relearn it easily enough. If you don’t get to it for ages, then it’s unlikely you needed it. When you need it, it’ll be conveniently waiting. Simply be as consistent as you deem appropriate, relearning 'lapses’ as desired and necessary. Prioritize what’s actionable, compelling, and urgent. You may want to explicitly classify cards by such urgency, and use this add-on to do so.
You can go back and relearn anything on your own schedule, as well. The Anki order is just a guideline to optimize and spare you the minute scheduling details. Even if you entirely discard the schedule and cram, you’ll merely be inefficient, compromising the long-term for the short-term–which presumably you have a good reason for, if you’re willing to make that sacrifice. You can always relearn for the long-term again, later.
In a sense, there are no due cards. There are just cards that are available for potentially optimized retrieval practice. As far as cards go, really, the cues are what matter. Think of these as ‘mental processing triggers’, supplemented by references.
In the database sense, the cues->references can be one-to-many relationships. You want uniquely identifiable cues (not necessarily in the first field in Anki, but some sort of primary field) where useful. That is, you can recycle cues (e.g. for savoring), but when organizing a network, you want uniqueness. You can have as many decks and cards as you like, clustered around these identifiable process triggers. Old decks just become part of a database of potential knowledge to extract. To clarify, the identifier would be more specific than tags, located in the primary cue field, and could link together cards that share some aspect of this key… like cards that deal with a particular formula or grammar point or location. Maybe we should work in the idea of foreign keys? Or let’s drop the analogy.
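To make the database analogy a little more concrete, here’s a minimal sketch in Python (the class and field names are my own illustration, not Anki’s actual schema): each uniquely identifiable cue acts as a primary key mapping to many references, so old decks become a queryable database of potential knowledge clustered around processing triggers.

```python
# A toy model of the cue -> references idea: each uniquely identifiable cue
# (a 'mental processing trigger') maps to many cards/references.
# Names here are illustrative only, not Anki's actual data model.
from collections import defaultdict

class CueIndex:
    def __init__(self):
        # cue id -> list of references (one-to-many)
        self._refs = defaultdict(list)

    def add(self, cue_id, reference):
        """Attach a reference (card, PDF page, URL...) to a cue."""
        self._refs[cue_id].append(reference)

    def references(self, cue_id):
        """All material clustered around one processing trigger."""
        return list(self._refs[cue_id])

index = CueIndex()
index.add("grammar:te-form", "Card: conjugate 食べる into te-form")
index.add("grammar:te-form", "Textbook, p. 87")
print(len(index.references("grammar:te-form")))  # → 2
```

The point of the sketch is only that the cue identifier, not the deck, is the organizing unit: two cards sharing a grammar point link through the same key.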
The notion of being stationed at Anki as it sends cards flying at you is terrible. Limit your study sessions–or session–each day by time and need, not by automatically generated, inaccurate quotas (the adaptiveness is an illusion; Anki can’t read your mind). Anki and all other software at present is just a dumb scheduling app, storing an organized network of ordered data for us to convert into knowledge. It’s made to ease our burdens, not add to them.
When software does become adaptive, it should be centered around this sort of associative, predictive learning, offering items to fill gaps or flesh out knowledge. Perhaps the first step in this direction will be tag clouds of extra candidate questions based on cue identifiers, and/or query expansion (as with Google searches). I’ll get to why things like identifiers and queries are useful in a moment. Dehaene was on to something with his idea of an OS with a global workspace that can predict our needs; as was Ullman with the notion of an OS that is scalable to our skill levels, like a spaced retrieval OS.
Cues should be open-ended as often as surgically targeted, allowing for different kinds of retrieval practice, acting as a non-intrusive scheduler.
Bury, delete, suspend where you need, depending on what you want to save for the future or study each day or what seems redundant.
There is no new and old, really: that is, if you find you get stuck always studying the new (likely indicating not enough selectiveness and a misconception that ‘motion = action’, to paraphrase Hemingway), remember that the old is made new again with relearning and reconsolidation.
In a sense you can think of two sorting columns for cards–primary: need/desire, secondary: dueness. Review what you want or feel fuzzy on, ordered by dueness (decreasing intervals [increasing difficulty] for relearning, decreasing difficulty for learning, as we noted previously).
When you’re answering one card, you may realize you’re fuzzy on something related–study that, on-the-fly. You end up with cards each session being related clusters, even if not directly adjacent in our memory network.
Anki’s AR lens metaphor gets expanded to become a personalized information filter, which you create and impose on the world as you navigate it, adapting it as you wish.
Create new cues/triggers as you review if they don’t exist already, as desired/needed. Perhaps ‘practice this Python method’ gets expanded into a new set of triggers to store with their reference data–that is, into a new set of cards. Instead of storing referential information on cards, you might also just specify PDF page numbers, titles, links, etc.
This emphasis on ‘processing’ vs. the algorithm reminds me of the notion that computer science is more served by ‘process expression’ than ‘algorithms’.
Feel free to set your dropout/retirement criterion for cards low, depending on future use. We discussed previously how flexible the # of successful reviews is before retiring cards.
We can associate ‘Anki’ with an SR-OS as noted before, as ‘Anki’ doesn’t really exist. Just some hackable code for organizing and presenting data on our devices.
We can go somewhere with this conception. The idea is that studying one ‘object’ can cue you to study others, regardless of scheduling, depending on your metacognitive awareness of your retention of those others. We want quick access, on-the-fly… filtering. As you may know, I use filtered decks exclusively. The filtering should be a mixture of targeted (for immediate needs/desires) and general (to supplement with anything left over).
The legendary Alan Kay has described his original creation of object-oriented programming as a purer kind of OOP: he wanted to get rid of data–no operating system, just an ‘object exchanger’. It’s a different conception of OOP from the current one: more verb-like, a hybrid OOP/functional style, as it were–he compares the ‘pure’ original Smalltalk to Lisp, a functional language.
“ … what you really want is objects that are migrating around the net, and when you need a resource, it comes to you — no operating system…
The user interface’s job was to ask objects to show themselves and to composite those views with other ones.” - via
Perhaps we can think of cues as functions, and answers as objects. Input would be the feedback/study process, output would be the knowledge extraction, with the ability to drill-down or roll-up, OLAP style. Or the cues are objects, the answers and other info are attributes, testing is the method, and your mind is the metafunction? Or maybe not. Let’s move on.
Anyway, I believe we should use software for spaced retrieval in the same way. We should basically have an opt-in experience at all times. ‘Decks’ don’t really exist. ‘Due’ cards don’t really exist. They are in the Void. We don’t need to see them when we’re in ‘client’ mode.
Brian Kernighan described the benefits of Unix this way:
“Someone once said that software stands between the user and the machine, and to me this conveys this picture of a great wall of software up there that you have to overcome to get anything done…
[Instead, with Unix, we have]… the shell, which is the interface… it sits there and waits for you to type commands at it and then interprets them… building blocks can be glued together in a variety of different ways… ” - via
This is what we want: command-line Anki! Just for the querying, mind you, we still want the multisensory digital presentations.
So we want to ditch decks, due counts, etc. on the ‘landing page’, and make use of command-line filtering. So you’re essentially compositing or pipelining ‘decks’, collections of cards to review in a session, rather than relying on the original ‘decks’.
So how do we achieve all this?
I would suggest that the dubiously named ‘Blind Anki’ add-on is essential: it removes from sight all the counts. This add-on supplements it by removing the headers and card stats message.
Next, you’d want to collapse all your decks into a deck titled ‘_’ (just an underscore), or something like that, hiding them. You can’t toggle ‘Void Anki’ (the name I’m giving the add-on), but perhaps that’s for the best, we don’t want to be tempted. During reviews, it always just says ‘1 new card’–this fake number is necessary to prevent Anki from thinking you finished the deck, apparently.
Finally, create a filtered deck, name it ‘void’ or something cool like that. You can always hit ‘F’ to create new filtered decks on the fly. I may look into creating an add-on that simply reopens the options for the single main filtered deck, maintaining the conceptual simplicity of the command-line analogy.
The ‘Create filtered deck from browser’ add-on is also useful: you can save searches and then use those to issue ‘command line statements’ (create filtered decks). ‘Autocomplete’ might be nice, though.
For the filtered deck options, I’d recommend adapting them to the CriteriEN process with the aforementioned notes about proximal vs. peak-end ordering for old or new cards; also use the low stakes and low key add-ons, ideally.
So as you interface with Anki, you’re just summoning what you want by ‘pipelining’ boolean queries together, as targeted or vague as you want, changing on-the-fly as you like. This works for interleaving, as well: both within ‘strands’ (e.g. Japanese input practice), or across them (e.g. input, output).
tag:urgent deck:_::CompSci ( algorithm and (node or tree) )
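A query like that can itself be built up from reusable parts, which is the ‘pipelining’ idea in miniature. Here’s a sketch (the helper names `q_and` and `q_or` are mine, not part of Anki; it relies on Anki’s search language treating space-separated terms as an implicit AND and supporting `or` with parentheses in recent versions):

```python
# A sketch of composing ('pipelining') Anki-style boolean search strings
# from reusable parts. Helper names are my own, not Anki's.
def q_or(*parts):
    # Anki's search language supports 'or' and parentheses (recent versions).
    return "(" + " or ".join(parts) + ")"

def q_and(*parts):
    # Space-separated terms act as an implicit AND in Anki search.
    return " ".join(parts)

query = q_and("tag:urgent", "deck:_::CompSci", q_or("algorithm", "node", "tree"))
print(query)  # → tag:urgent deck:_::CompSci (algorithm or node or tree)
```

Saved searches then become named building blocks you glue together per session, much like shell pipelines.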
So don’t be intimidated by Anki, and don’t make it an opt-out chore where you eventually just put off study entirely. Don’t let the illusion of adaptiveness or the Supermemo model constrain your notion of how to apply spaced retrieval; marginal maximization can cost you effective learning by compromising your perspective of the process and inducing an unfortunate all-or-nothing approach.
Bonus: I also recommend the ‘quick reschedule’ add-on, which lets you change the interval of cards as you review; if ‘r’ doesn’t work as a hotkey, try ‘v’ or something by editing the add-on .py in your text editor. I also like this add-on which places clickable tags on cards. You can change the placement of the tags in the Anki card template editor with {{Tags}}.
“The memory palace serves not as a learning tool but as a method to organize what’s already been learned so as to be readily retrievable at essay time. This is a key point and helps to overcome the typical criticism that mnemonics are only useful in rote memorization.
To the contrary, when used properly, mnemonics can help organize large bodies of knowledge to permit their ready retrieval…
The value of mnemonics to raise intellectual abilities comes after mastery of new material, as the students at Bellerbys are using them: as handy mental pockets for filing what they’ve learned, and linking the main ideas in each pocket to vivid memory cues so that they can readily bring them to mind and retrieve the associated concepts and details, in depth, at the unexpected moments that the need arises.” - via
Personally, I’ve always enjoyed the idea of a memory palace, but I’ve never been able to select an environment I’d want to have ingrained in my mind, and I take a minimalist approach to decorating, anyway. My memory palace is something more abstract, all pages and files.
It’s an interesting complement to the idea of getting better at learning: the use of a memory palace to organize what you encode to better retrieve it in the future. A database, of sorts.
It goes back to the importance of formulating and retaining cues: you bundle material into key ideas, anchor those keys to an area of your memory palace and associate them with something memorable, then when you want to retrieve information–using the keys as a reference–you unbundle. If, as you enumerate the key ideas, you forget, you move on and fill it in later.
As I noted, I think it’s better to focus on the idea of ‘pointers’ and summaries, where there isn’t an obsession with storing detailed memories (unless absolutely necessary) at the onset, so much as building a web of associations and references that becomes richer over time. Maybe an even better metaphor is a series of endless gates… a virtual Fushimi Inari Taisha.
A note on the environment I forgot to mention: I like the idea of building successive relearning into your environment, ensuring that in a roughly spaced way you have opportunities for active usage; in education there is the idea of the ‘spiral’.
We can connect this idea of controlling your environment to ‘mindless eating’ as well as the 2-minute rule for ending procrastination, or the 20-second rule (and event-based cues) for habit-formation. What we want is both mindful and mindless spaced retrieval.
That balance is crucial: ‘don’t mistake motion for action’; we want deliberation for efficiency, but we don’t want ‘procrastination-by-organization’, or to ‘overfit’ ourselves to training sets. Descriptive labels for supervised training are useful, but we want test sets, and unsupervised learning and spontaneous usage, as well. We want ‘useful forgetting’, but also retention. We want to accumulate useful things without hoarding; be minimalist and maintain organization for your future self. But organize descriptively, not optimizing prematurely or being a slave to perfect order, and leave room for growth and decay (of memories).
Transactive memory is a double-edged sword–we’re less likely to remember things we record, yet it reduces the overhead.
“Sparrow et al. originally claimed that reliance on computers is a form of transactive memory, because people share information easily, forget what they think will be available later, and remember the location of information better than the information itself.” - via
There’s the alleged Google effect, but I also wonder if Google can’t be a fine SRS in itself. The flow of browsing can be a splendid study session. Perhaps when you bookmark items, you can store questions with them.
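As a sketch of that bookmark idea (purely illustrative, not any browser’s actual API): pair each saved URL with a retrieval question, so that revisiting your bookmarks doubles as a self-test, with the page itself serving as delayed feedback.

```python
# Illustrative only: store a retrieval cue alongside each bookmark, so that
# browsing your saved links becomes a quiz before you click through.
bookmarks = [
    {"url": "https://en.wikipedia.org/wiki/Testing_effect",
     "question": "What advantage does retrieval practice have over restudying?"},
    {"url": "https://en.wikipedia.org/wiki/Spacing_effect",
     "question": "Why does spacing out study sessions improve retention?"},
]

def quiz(items):
    # Present each stored question; the bookmarked page is the feedback.
    return [(b["question"], b["url"]) for b in items]

for question, url in quiz(bookmarks):
    print(question, "->", url)
```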
Study sessions should indeed flow, a give and take of being shaped by what we learn as we shape it, and we know that the region of proximal learning, studying easier items first, helps with this. But there’s also the peak-end rule and remembered utility: we want to end on a high note, with easier items. How to combine them? I would suggest that for what we haven’t learned yet, perhaps starting harder and ending easier is best. For what we’ve already learned, we aim for the most well-known (easiest) first, then the more newly learned (harder) items.
In this sense, when studying, we prioritize new things (hardest to easiest), and should prioritize the selective study of what we want or need from what we’ve learned, ordering the information secondarily by easiest to hardest.
In the grand scheme of things, we want to satisfice, take a good enough approach at first, then maximize. We can move the maximization phase early for newly learned things, e.g. criterion, then relax once we move on to ‘relearning’. Just focus on the accumulation, the incremental additive process.
“Being tested on information a certain number of times is much better than simply studying the information an equivalent number of times, so long as a person gets feedback (the right answer) if she or he does not know the answer.

Our research assistants would quiz certain pieces of information three times, other information once, and other information not at all. The quizzes occurred in the classroom and did not count for a grade – the students seemed to enjoy them as a fun activity that broke up the lectures. They used response systems – “clickers” – to signal the answer to questions and then got immediate feedback…” - via
“People remember things better, longer, if they are given very challenging tests on the material, tests at which they are bound to fail. In a series of experiments, they showed that if students make an unsuccessful attempt to retrieve information before receiving an answer, they remember the information better than in a control condition in which they simply study the information. Trying and failing to retrieve the answer is actually helpful to learning…

Students were asked to read the essay and prepare for a test on it. However, in the pretest condition they were asked questions about the passage before reading it, such as “What is total color blindness caused by brain damage called?” Asking these kinds of questions before reading the passage obviously focuses students’ attention on the critical concepts. To control this “direction of attention” issue…

Students might consider taking the questions in the back of the textbook chapter and trying to answer them before reading the chapter. (If there are no questions, convert the section headings to questions. If the heading is Pavlovian Conditioning, ask yourself What is Pavlovian conditioning?). Then read the chapter and answer the questions while reading it. When the chapter is finished, go back to the questions and try answering them again. For any you miss, restudy that section of the chapter. Then wait a few days and try to answer the questions again (restudying when you need to). Keep this practice up on all the chapters you read before the exam and you will have learned the material in a durable manner and be able to retrieve it long after you have left the course.

Of course, these are general-purpose strategies and work for any type of material, not just textbooks. And remember, even if you get the questions wrong as you self-test during study, the process is still useful, indeed much more useful than just studying. Getting the answer wrong is a great way to learn.” - via
“But what if, instead, you took a test on Day 1 that was just as comprehensive as the final but not a replica? You would bomb the thing, for sure. You might not understand a single question. And yet as disorienting as that experience might feel, it would alter how you subsequently tuned into the course itself — and could sharply improve your overall performance.

This is the idea behind pretesting, one of the most exciting developments in learning-science… Rather, the attempts themselves change how we think about and store the information contained in the questions. On some kinds of tests, particularly multiple-choice, we benefit from answering incorrectly by, in effect, priming our brain for what’s coming later. That is: The (bombed) pretest drives home the information in a way that studying as usual does not. We fail, but we fail forward…

But the emerging study of pretesting flips that logic on its head. “Teaching to the test” becomes “learning to understand the pretest,” whichever one the teacher chooses to devise. The test, that is, becomes an introduction to what students should learn, rather than a final judgment on what they did not.” - via
“Asking questions also helps you understand more deeply. Say you’re learning about world history, and how ancient Rome and Greece were trading partners. Stop and ask yourself why they became trading partners. Why did they become shipbuilders, and learn to navigate the seas? It doesn’t always have to be why — you can ask how, or what.

"In asking these questions, you’re trying to explain, and in doing this, you create a better understanding, which leads to better memory and learning. So instead of just reading and skimming, stop and ask yourself things to make yourself understand the material.” - via
“Along these lines, Bjork also recommends taking notes just after class, rather than during—forcing yourself to recall a lecture’s information is more effective than simply copying it from a blackboard. “Get out of court stenographer mode,” says Bjork. You have to work for it.

The more you work, the more you learn, and the more you learn, the more awesome you can become.” - via
“When you read a text or study lecture notes, pause periodically to ask yourself questions like these, without looking in the text: What are the key ideas? What terms or ideas are new to me? How would I define them? How do the ideas relate to what I already know? Set aside a little time every week throughout the semester to quiz yourself on the material in a course, both the current week’s work and material covered in prior weeks.
‘Taking notes doesn’t have to be a bad thing. But it’s the way many of us do it—simply transcribing information we read or hear in a lecture, rather than making connections and interpretations—that hinders learning…
A court stenographer can take down every word from a day in court and not be able to tell you what the case was about later in the day,’ Bjork points out. Fruitful note-taking, he says, is when it’s done in fits and starts, more of a commentary than a transcription. Better yet, take notes right after you’ve read something or heard a lecture—the act of recall is far more powerful as a learning tool than highlighting the material on the page or copying notes from a blackboard.” - via
“… students take a lot of tests. It is what happens afterward—or more precisely, what does not happen—that causes these tests to fail to function as learning opportunities. Students often receive little information about what they got right and what they got wrong. “That kind of item-by-item feedback is essential to learning, and we’re throwing that learning opportunity away,” she says. In addition, students are rarely prompted to reflect in a big-picture way on their preparation for, and performance on, the test. “Often students just glance at the grade and then stuff the test away somewhere and never look at it again,” Lovett says. “Again, that’s a really important learning opportunity that we’re letting go to waste…”

The idea, Lovett says, is to get students thinking about what they did not know or did not understand, why they failed to grasp this information and how they could prepare more effectively in advance of the next test…

Over time, repeated exposure to this testing-feedback loop can motivate students to develop the ability to monitor their own mental processes…

Gosling and Pennebaker, who… published their findings on the effects of daily quizzes… credited the “rapid, targeted, and structured feedback” that students received with boosting the effectiveness of repeated testing.” - via
“For instance, students could use a version of the Cornell notetaking system in which they write down key words in the margins of their notes that refer to each of the to-be-learned concepts and then go through each key word and attempt to retrieve and write down the correct concept from memory.

You could also encourage students to use flashcards for the most important concepts. Importantly, for successive relearning, after they attempt to retrieve each concept, they should check the correct answer to evaluate whether they have it or not. If they do not have it, then they should mark that concept and return to it again later in the study session. In addition, they should continue doing so until they can correctly retrieve it.

Once they do correctly retrieve it, then they can remove the concept from further practice during that particular session.” - via
“Questions with rich semantic content enhance subsequent learning even when feedback is delayed, but questions that consist of a single cue word (e.g., whale – ??? ) do not. There are times when asking a question is possible but providing an answer immediately afterward is not. The classic example is when someone takes a test. Many educators worry that when a student makes an error on a test, and it is not corrected immediately, his learning suffers. The present results suggest the opposite: Taking a test and getting an answer incorrect enhances subsequent learning, even if the learner is not told the correct answer until a substantial amount of time later. From a practical perspective, the results suggest that asking someone a meaningful question that he or she cannot answer enhances subsequent learning, even if the correct answer is not provided until after a substantial delay. These results give comfort to educators who face situations that require tests without immediate feedback.” - via
ja-dark: Don’t let your focus dissolve into passivity during learning. Obviously, feedback is crucial; yet here’s the thing–it doesn’t have to be immediate, and we don’t need to overemphasize the avoidance of errors. We can take our time making mistakes. What this means is that, again, you can do effective spaced retrieval for ideal learning without SRSing, throughout each day, even if you can’t get immediate feedback, and even before you start ‘encoding’, and without having to pass/fail material quickly.
I like how this ties into low stakes testing, also–it can even be related to a notion of ‘making mistakes on purpose’ to learn better.
Recall that I noted this is in a sense, a process of cue learning, with an awareness of associations–build a network of knowledge like a spider weaving a web, associating into agglomerative clusters, branching off as necessary. This is a process of building a mental annotated bibliography, summarizations and pointers really, rather than trying to memorize answers right away and all the time.
As Douglas Rushkoff noted a long time ago, indiscriminate data storage is for suckers–we don’t need to ‘surrender to the algorithm’ to ‘learn everything’. It’s all about access and the ability to process information, using divergent thinking to balance convergent thinking. In the end, we want the cues to fall away, but they’re always there to make things easier. So let’s worship the glitch, if anything. Meditate on the glitch in the Void, rather.
We want to record or memorize cues while encoding so that we can make retrieval attempts–successful or unsuccessful–no matter where we are. The very act of doing so enhances learning. Referencing at some point later for feedback is easy, but formulating good questions is not. Being able to do that on-the-fly allows us to structure our mental frameworks, increasing our metacognitive awareness of what we don’t know, the gaps to fill, links to integrate, memories to practice.
So practically, we want to learn to formulate cues as we listen and read. If the format allows, stopping periodically (every few minutes, or every page, or chapter, etc.) for retrieval attempts (MC/free/cued recall, whatever) of what we just learned, or of things that occur to us, with or without immediate feedback, is a good option. A skill here is finding a balance of just the right quality and quantity of questions; we don’t want to turn a 3-hour lecture into 1,000 questions.
If you need to delay your feedback for a full day, this is fine, just make sure the questions are meaningful, where you know there’s an intrinsic correct answer; don’t transcribe everything, but don’t be half-hearted either, focus on good questions.
So spending time throughout the day asking yourself questions, attempting recall of what you just encountered, is fine as long as you get feedback within a day; just treat it like ‘pretesting with delayed feedback’ (which can enhance learning).
“Writing down details of the material, summarizing it or just thinking about it rather than passively reading is really effective,” Karpicke said.
The summarization idea is debatable, but is more effective than transcribing or nothing at all. It’s a matter of re-formulating actively, precisely, and concisely. A key aspect of it is the explanatory effect, which we’ve discussed before.
“Self-explaining involves explaining the content of a lesson to oneself during learning…
Learning by teaching involves explaining to-be-learned material with the goal of helping others learn. Thus, teaching is similar to but distinct from learning by self-explaining.” - via
Bjork’s advice not to record notes during lecture seems risky, if you can’t ensure you get feedback later through, perhaps, D2L or Blackboard.
If you have continual access to something, there’s always the option of purely mental cue formulation, and you can come back later. If you have lectures on video which can be bookmarked, that’s handy. Or you can write down page numbers, URLs.
And to reiterate, as long as you have the cues, you can study without immediate feedback. To go back to the idea of storage–while trying to memorize everything is inefficient, being organized as you store things you may want to retain will greatly aid your future self. We want a schema that allows us to fluidly do what we want to do with minimal referencing, while also streamlining the referencing process.
When you read chapters, you can read them out of order.
Try to get used to predicting answers first, learning to process information before being given the answer.
Keep your questions open and flexible so that you can extend them as you learn, creating subquestions or grouping under other questions. This can allow you to ‘chunk’ cues so that you have fewer to maintain.
In expanding our notion of timing and practice, remember that we can simply schedule practice sessions as “practice this skill now”, so that you can use worked examples to do math, or physical practice, etc., for however long you wish.
For programming, it could be as simple as making note of certain functions to use successively in programming tasks.
Finally, there’s ‘concept mapping’, which doesn’t work as well as retrieval; but as a retrieval exercise in itself–recall in the form of a concept map–it seems about as effective as other retrieval types. Perhaps it isn’t worth the extra effort.