36th SuperMemo anniversary event
Pre-event text-based Q&A
Piotr Wozniak answered some questions before the start of the event.
- When to open source SM?
- open source is a matter of community, not individuals
- Yes, but if all is closed code, then the open community has to start at zero. As I understand it, the question is whether SM could offer interfaces for open source that could be built upon without giving away the secrets
- the best resolution to chicken-and-egg is to start small with alg-sm2, and build momentum by proving the strength of the community; the Alg is not the most important part of such a project; with Jarette, the last excuse is gone 🙂
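For readers who want to try the "start small with alg-sm2" route, below is a minimal sketch of an SM-2-style scheduler in Python, following the publicly documented SM-2 description (not SuperMemo's current code, and simplified to a single function).

```python
def sm2_review(quality, repetitions, interval_days, easiness):
    """One review step of an SM-2-style scheduler.

    quality      - self-assessed grade 0-5 (>= 3 counts as a successful recall)
    repetitions  - number of successful reviews in a row so far
    Returns the updated (repetitions, interval_days, easiness).
    """
    if quality >= 3:
        if repetitions == 0:
            interval_days = 1
        elif repetitions == 1:
            interval_days = 6
        else:
            interval_days = round(interval_days * easiness)
        repetitions += 1
    else:
        # Lapse: restart the repetition cycle but keep the easiness factor.
        repetitions = 0
        interval_days = 1

    # Grade-driven update of the easiness factor; 1.3 is the published floor.
    easiness += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
    easiness = max(easiness, 1.3)
    return repetitions, interval_days, easiness

# e.g. sm2_review(4, 2, 6, 2.5) -> (3, 15, 2.5)
```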
- How do you feel about FSRS?
- feel great!
- Are you aware of FSRS?
- aware, appreciating, crossing fingers but buried in the priority queue.
- Why do you hate your customers?
- best question! if no better shows up, I will start from this. (Note, he didn't).
- What’s the priority algorithm of SuperMemo? Will Woz write documents for the priority algorithm like he has written for the memory algorithm?
- documenting it in detail would be very expensive, but we can address individual (dis)likes
- What is the future direction of development for SM? Better algorithms or something else?
- nothing revolutionary on the 5000-task tasklist; today progress feels incremental; it always does until a big thing shows up, or a little thing turns out to be a big thing
- What is the main focus of woz's investment in the near future? Spring of Students? SM development? Or something else?
- the biggest thing is "freedom for the brains"
- How many SM-programmers do you currently employ?
- 5.
- Note: I think the 5 developers refers to the SuperMemo World company as a whole, which does not mean 5 people work on SuperMemo for Windows.
- Maybe I missed it; I have only read it just now. I'm not very interested in continuing the comparison between FSRS and SuperMemo, because only Woz could implement SuperMemo.
- "implement SuperMemo" sounds intimidating; anyone can implement a simple algorithm based on the 3-component model of memory; only data sets are an issue
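To illustrate what "a simple algorithm based on the 3-component model of memory" could look like, here is a toy sketch that tracks retrievability and stability per item and schedules the next review when predicted retrievability falls to a target. The exponential forgetting curve and the stability-increase rule are illustrative assumptions only, not SuperMemo's actual formulas.

```python
import math

class Item:
    """One memorized item under a toy three-component (DSR) model.

    Difficulty, stability and retrievability are the three components;
    the specific update rules below are illustrative assumptions only.
    """

    def __init__(self, difficulty=0.3):
        self.difficulty = difficulty   # 0 (easy) .. 1 (hard)
        self.stability = 1.0           # days until retrievability drops to 90%
        self.last_review = 0.0         # time of the last review, in days

    def retrievability(self, now):
        # Exponential forgetting: R = 0.9 when (now - last_review) == stability.
        t = now - self.last_review
        return math.exp(math.log(0.9) * t / self.stability)

    def review(self, now, recalled):
        r = self.retrievability(now)
        if recalled:
            # Stability grows more when retrieval was harder (low R)
            # and when the item itself is easier (low difficulty).
            self.stability *= 1.0 + 4.0 * (1.0 - self.difficulty) * (1.0 - r)
        else:
            # Lapse: stability collapses back toward its starting value.
            self.stability = max(1.0, self.stability * 0.3)
        self.last_review = now

    def next_review(self, target_r=0.9):
        # Time at which predicted retrievability reaches target_r.
        return self.last_review + self.stability * math.log(target_r) / math.log(0.9)
```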
- In the last event Woz was quite enthusiastic about finding ways for the community to contribute more directly to SM. APIs, plugins, even open source were discussed. What's the current state of this topic within the SM team, any progress, are they still open to/interested in the idea?
- BEST QUESTION ... social concept network is the answer. Even if this is the best Q, we should move it to the last 10 min. or Part II (this is a broad and important topic that might squeeze out oxygen from the meeting)
- Will SM20 have some sync capabilities with the phone app of SM world? Based on https://supermemopedia.com/index.php?title=Synchronization_with_supermemo.com&diff=33992&oldid=33991
- there is only one weak cog in the machine: Woz and his tasklists, Plans and priorities; perfection can be crippling 🙂 ... PUSH ME!
- When will the public scheduling API become available?
- Can the functionality of YouTube.htm be kept the same in future versions of SM to allow users to continue to write their own incremental video scripts?
- I believe that it is possible to implement a functional API within SM by using the techniques you have already implemented for YouTube.htm (aka reading the <input> and <option> fields). Would you be interested in providing source code for portions of SM related to YouTubeVideo.htm, and how MSHTML is used to render Elements? Based on my findings, you could provide an API without a major refactoring of the source code by adding more <input> fields user scripts can toggle.
- I am impressed! The brain treats it like an instant subconscious boost to priority!
- For what purpose do you use these arrows? ("to the beginning of the collection"; "to the end of the collection") Do you misuse it as a quick switch to favourites? Or what purpose do you have for it? I don't think I have ever used these outer arrows.
- I do not use those buttons either, but they were born in 1995 following a multimedia courseware standard; for multimedia courses they were like maximize/normal/minimize in Windows
- Yeah, I agree with that. FSRS is also based on the DSR model. The main bottleneck is the optimization. SuperMemo has many matrices, like the R-matrix, to optimize the parameters of the DSR model. They are complicated.
- yes. with your good knowledge of ML, you can forget matrices and theory; just stick to the model; for me the biggest value of alg-sm17 is not efficiency, but insights into the model itself; I can say that with a solid re-affirmation of theory, my whole life changed because I see brains in a totally different way than 99.99% of the planet
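A sketch of the "forget matrices and theory; just stick to the model" approach: fit a forgetting-curve parameter directly against a review log by minimizing log-loss on recall predictions. The curve shape, the synthetic data, and the review-log layout are assumptions made up for this illustration.

```python
import math
import random

def predicted_recall(t, stability, decay):
    """Illustrative forgetting curve with one free shape parameter `decay`."""
    return math.exp(-decay * t / stability)

def log_loss(logs, decay):
    """Mean negative log-likelihood over (elapsed_days, stability, recalled) records."""
    eps = 1e-9
    total = 0.0
    for t, stability, recalled in logs:
        p = min(max(predicted_recall(t, stability, decay), eps), 1 - eps)
        total += -math.log(p) if recalled else -math.log(1 - p)
    return total / len(logs)

def fit_decay(logs, lo=0.01, hi=2.0, steps=200):
    """Grid search over the single free parameter; enough to show the idea."""
    candidates = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(candidates, key=lambda d: log_loss(logs, d))

# Synthetic demo data: true decay of 0.105 (recall ~ 0.9 when t == stability).
random.seed(0)
true_decay = 0.105
logs = []
for _ in range(5000):
    stability = random.uniform(1, 100)
    t = random.uniform(0.5, 3 * stability)
    p = predicted_recall(t, stability, true_decay)
    logs.append((t, stability, random.random() < p))

print(fit_decay(logs))  # should come out close to 0.105
```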
- What do you mean with "data sets are an issue"? Can you explain the issues?
- I loved my 2-3 years with alg-sm17 because I could play like a baby with my 30 years of data, pondering individual items, their history, and how they intertwine with personal life; without that factor, the job would get far more boring and technical. If you have a good theoretical model, all you need is a good dataset (I hear mnemosyne has the largest, but that's not incremental reading, so it would fit better with an Anki-like work mode)
- There is the classic question of what is the most important thing to consider when buying a flat. The answer is "the location, the location, and the location". I think the most important thing in free learning is, likewise, the quality (3x) of the learning material and its didactic preparation. And unfortunately SM does not help here.
- I often learn from horrible sources. If you find a golden nugget after 20 min. of time-wasting, it may still pay off. With chatbots now, all sources seem digestible, however bad. All we care about is the good ideas inside!
- Does Woz know about the mnemonic medium? It's an augmented reading environment supported by spaced repetition. I think it has a lot of common ground with the incremental reading of SuperMemo.
Link: https://notes.andymatuschak.org/zPNoWjs5jQ7iMDpxxBtjSbg
- ask Biedalak in case I mislead, but ... "The dataset for the FSRS benchmark comes from 20 thousand people who use Anki, a flashcard app. In total, this dataset contains information about ~1.5 billion reviews of flashcards" ... with supermemo.com or supermemo.net there was/is a horrible bias in the data: lots of people who start and quit; those who do not stick to the 20 rules; in other words, lots of noise; with Murakowski we "quarreled" a lot about this. He wanted "big data", while I insisted on going from person to person, starting with myself, and including crafty long-term users with a constellation of habits. You seem to be in the Murakowski camp 🙂 I insist that lots of fun stuff starts happening after 3-4 years of learning; also ... the memory model needs to extend to those long intervals (see "stabilization curve"). All in all, it is interesting only for memory researchers, because I said in 1993 (in an interview) that further progress will not make much difference to users. They have much bigger problems to tackle (like a hungry baby in the other room). In other words, the art of perfect optimization is only for you and me 😉 I study memory. You study algorithms. Perfect! 🙂
- What are your plans after retirement? Open source the code?
- Larry King: "Retire to what?" 🙂
- someone suggested opening up the above in a very sensible incremental way; the main inhibitor was the lack of tangible benefit for the invested time; the suggestion above has a good bang/minute
- Do you use WhatsApp or any kind of messaging application? I'm curious to know
- I went to Discord for kids! I get requests to get on WhatsApp but ... they never come from kids! 🙂 ... Kids get a credit for the Round Table of education. We were looking for some expensive location with participant vacation and the kids said: "why not Discord?". We did it for free! I love kids!
- How often do you communicate with ChatGPT? Do you think it will change education or learning hugely?
- I use Copilot/Bing some 20-40 times per day; ChatGPT for language things (esp. Polish 🙂); Bard/Gemini are still immature 🙂
- Do you think relying on LLMs for generating Q&As or adjusting formulation will have an impact on users' recall and performance in the long-term? (as in skipping a step in the process)
- I would never want anyone to play with my items. There is some mnemonic magic in doing it yourself; however, I know you learn languages, so you have more room for automation there. You will have to see for yourself 🙂 (and write an article! 🙂)
- Is the algorithm used in SuperMemo 19 the same as in SuperMemo 18? If no, will the wiki be updated?
- SM-19 might include some bug fixes; inessential
- Do you think the minimum definition of IR needs some revision or updating?
- nothing comes to mind. If you create open-source, user-friendly software doing that minimum, you might change the world 🙂
- Have you ever reset any collection parameters? When do you think a user would need to use this feature?
- Some cards have similar content. They could interfere with each other or act as reminders of each other, such as clozes from the same topic. Did you do any research on their memory patterns?
- My work involves learning formulas for actuarial science. How can we write such formulas in SM? That would solve all my learning issues at present.
- yes. I bet SuperMemo is horrible for a mathematician (so says Paul R.). I flatten formulas to a simple programming language or to verbal descriptions. As I am not at school, I can use any notation I wish, so I can only empathize. Other users will probably tell you more fitting approaches
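As a hypothetical illustration of "flattening formulas to a simple programming language" (the actuarial example below is mine, not from the event): the answer side of an item can hold a few lines of code instead of classical notation.

```python
# Q: How is the present value of an n-year annuity-immediate of 1 per year computed?
# A (flattened to code instead of actuarial notation):
def annuity_present_value(n, i):
    """Present value of 1 paid at the end of each year for n years, at interest rate i."""
    v = 1 / (1 + i)                 # one-year discount factor
    return (1 - v ** n) / i

# e.g. annuity_present_value(10, 0.05) is approximately 7.72
```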
- Would there be a 4th component of long-term memory, such as "learntropy" at the time of the repetition? How do you compare different levels of learntropy for the user across time, or when comparing users (the same difficulty but different learntropy)? I.e., could learntropy be estimated or computed, and thus weighted into the algorithm?
- In one of your documents you noted that it might not be optimal for the algorithm to aim at the same 90% retention for every interval when the requested forgetting index is fixed (at the default 10%). You argue that for mature material with a long interval, it costs more to start again from the beginning after a lapse than for material with shorter intervals. In other words, it should be considered whether SM should automatically raise the target retention for material with long intervals, e.g. from 90% to 95% (dynamically, of course). Have you thought about this further?
- I have a procedure for simulating that! it was never opened in SuperMemo. You revive my passion for algorithms 🙂
- You simulate this manually in SM? By manually adjusting the requested FI?
- No. You optimize the algorithm by adding one extra dimension: the requested forgetting index. With your question, I change my answer: Paul Robichaud suggested that improvement the first time we met (Aug 2016). Now, with alg-sm17, his suggestion is finally possible. So the answer to the question "Can we improve the algorithm further?" is ... yes, if we change the definition of the goals. Instead of having FI=10%, you may look for lifetime benefit (defined in some crafty way) 🙂
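A toy sketch of what "adding the requested forgetting index as an extra dimension" and optimizing for "lifetime benefit" might mean: for each item, search over candidate target retentions and keep whichever maximizes an assumed benefit rate (value of having the item retrievable, minus review and relapse costs). The forgetting curve and all the numbers are illustrative assumptions, not SuperMemo's.

```python
import math

LN09 = math.log(0.9)

def interval_for_target(stability, target_r):
    """Interval after which retrievability drops to target_r (same illustrative curve as above)."""
    return stability * math.log(target_r) / LN09

def benefit_per_year(stability, target_r,
                     value=10.0, review_cost=0.05, relapse_cost=1.0):
    """Toy 'lifetime benefit' rate for one item kept at a given target retention.

    value        - worth of having the item retrievable for a year (weighted by avg retrievability)
    review_cost  - cost of one repetition
    relapse_cost - extra cost incurred whenever the item is forgotten and relearned
    All three are arbitrary illustrative units.
    """
    interval = interval_for_target(stability, target_r)
    reviews_per_year = 365.0 / interval
    avg_retrievability = (target_r - 1.0) / math.log(target_r)  # mean of the exponential curve
    lapses_per_year = reviews_per_year * (1.0 - target_r)
    return (value * avg_retrievability
            - review_cost * reviews_per_year
            - relapse_cost * lapses_per_year)

def best_target(stability, candidates=(0.90, 0.92, 0.94, 0.96, 0.98)):
    return max(candidates, key=lambda r: benefit_per_year(stability, r))

# With these toy numbers, young items (small stability) are cheapest to keep at
# a 90% target, while very mature items end up with a higher optimal target,
# i.e. a lower requested forgetting index for long intervals.
for s in (10, 100, 1000):
    print(s, best_target(s))
```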
Main topics that were discussed:
SuperMemo API