Learning - User contributions for Issa Rice [en], retrieved 2019-09-21 (MediaWiki 1.29.2)
https://learning.subwiki.org/w/index.php?title=Understanding_mathematical_definitions&diff=778
Understanding mathematical definitions - revision of 2019-07-22 by Issa Rice: /* List of steps */
<hr />
<div>'''Understanding mathematical definitions''' refers to the process of understanding the meaning of definitions in mathematics.<br />
<br />
==List of steps==<br />
<br />
Understanding a definition in mathematics is a complicated and laborious process. The following table summarizes some of the things one might do when trying to understand a new definition.<br />
<br />
{| class="sortable wikitable"<br />
|-<br />
! Step !! Condition !! Description !! Purpose !! Examples<br />
|-<br />
| Type-checking and parsing || || Parse each expression in the definition and understand its type. || It's easy to become confused when you don't know the meanings of expressions used in a definition. So the idea is to avoid this kind of error. || [https://machinelearning.subwiki.org/wiki/User:IssaRice/Type_checking_Pearl%27s_belief_propagation_notation]<br />
|-<br />
| Checking assumptions of objects introduced || || Remove or alter each assumption of the objects that have been introduced in the definition to see why they are necessary. || Generally you want definitions to be "expansive" in the sense of applying to many different objects. But each assumption you introduce whittles down the number of objects the definition applies to. In other words, there is tension between (1) trying to have expansive definitions, and (2) adding in assumptions/restrictions in a definition. So you want to make sure each assumption [https://wiki.lesswrong.com/wiki/Making_beliefs_pay_rent pays its rent] so that you don't make a definition narrower than it needs to be. || In the definition of convergence of a function at a point, Tao requires that <math>x_0</math> must be adherent to <math>E</math>. He then says that it is not worthwhile to define convergence when <math>x_0</math> is not adherent to <math>E</math>. (The idea is for the reader to make sure they understand why this assumption is good to have.)<br />
|-<br />
| Coming up with examples || || Come up with some examples of objects that fit the definition. Emphasize edge cases. || Examples help to train your intuition of what the object "looks like". || For monotone increasing functions, an edge case would be the constant function.<br />
|-<br />
| Coming up with counterexamples || || || As with coming up with examples, the idea is to train your intuition. But with counterexamples, you do it by making sure your conception of what the object "looks like" isn't too inclusive. [https://en.wikipedia.org/wiki/Confirmation_bias#Wason's_research_on_hypothesis-testing] ||<br />
|-<br />
| Writing out a wrong version of the definition || || || || See [https://gowers.wordpress.com/2011/09/30/basic-logic-quantifiers/ this post] by Tim Gowers (search "wrong versions" on the page).<br />
|-<br />
| Understanding the kind of definition || || Generally a definition will do one of the following things: (1) it will construct a brand new type of object (e.g. definition of a ''function''); (2) it will take an existing type of object and create a predicate to describe some subclass of that type of object (e.g. take the integers and create the predicate ''even''); (3) it will define an operation on some class of objects (e.g. take integers and define the operation of ''addition''). || || <br />
|-<br />
| Checking well-definedness || If the definition defines an operation || || || Checking that addition on the integers is well-defined.<br />
|-<br />
| Checking consistency with existing definition || If the definition supersedes an older definition or reuses previously defined notation || || || Addition on the reals after addition on the rationals has been defined.<br/><br/>For any function <math>f:X\to Y</math> and <math>U\subset Y</math>, the inverse image <math>f^{-1}(U)</math> is defined. On the other hand, if a function <math>f : X\to Y</math> is a bijection, then <math>f^{-1} : Y \to X</math> is a function, so its forward image <math>f^{-1}(U)</math> is defined given any <math>U\subset Y</math>. We must check that these two are the same set (or else have some way to disambiguate which one we mean). (This example is mentioned in both Tao's ''Analysis I'' and Munkres's ''Topology''.)<br />
|-<br />
| Disambiguating similar-seeming concepts || || || The idea is that sometimes, two different definitions "step on" the same intuitive concept that someone has. || (Example from Tao) "Disjoint" and "distinct" are both terms that apply to two sets. They even sound similar. Are they the same concept? Does one imply the other? It turns out, the answer is "no" to both: <math>\{1,2\}</math> and <math>\{2,3\}</math> are distinct but not disjoint, and <math>\emptyset</math> and <math>\emptyset</math> are disjoint but not distinct.<br/><br/>Partition of a set vs partition of an interval.<br/><br/>In metric spaces, the difference between bounded and totally bounded. They are not the same concept in general, but one implies the other, so one should prove the implication and find a counterexample for the converse. However, in certain metric spaces (e.g. Euclidean spaces) the two concepts ''are'' identical, so there one should prove the equivalence.<br/><br/>Sequentially compact vs covering compact: equivalent in metric spaces, but not equivalent in general topological spaces.<br/><br/>Cauchy sequence vs convergent sequence: equivalent in complete metric spaces, but not equivalent in general (although convergent implies Cauchy in general). However, even an incomplete metric space can be completed, so the two ideas sort of end up blurring together.<br />
|-<br />
| Googling around/reading alternative texts || || Sometimes a definition is confusingly written (in one textbook) or the concept itself is confusing (e.g. because it is too abstract). It can help to look around for alternative expositions, especially ones that try to explain the intuitions/historical motivations of the definition. See also [[learning from multiple sources]]. || || In mathematical logic, the terminology for formal languages is a mess: some books define a structure as having a domain and an interpretation (so structure = (domain, interpretation)), while others define the same thing as interpretation = (domain, denotations), while still others define it as structure = (domain, signature, interpretation). The result is that in order to not be confused when e.g. reading an article online, one must become familiar with a range of definitions/terminology for the same concepts and be able to quickly adjust to the intended one in a given context.<br><br>To give another example from mathematical logic, there is the expresses vs captures distinction. But different books use terminology like arithmetically defines vs defines, represents vs expresses, etc. So again things are a mess.<br />
|-<br />
| Drawing a picture || Ideally do this for every definition, but some subfields of math (e.g. analysis) are a lot more visual than others (e.g. mathematical logic). || || || Pugh's ''Real Mathematical Analysis'', Needham's ''Visual Complex Analysis''.<br />
|-<br />
| Chunking/processing level by level || If a definition involves multiple layers of quantifiers. || || || See Tao's definitions for <math>\varepsilon</math>-close, eventually <math>\varepsilon</math>-close, <math>\varepsilon</math>-adherent, etc.<br />
|-<br />
| Asking some stock questions for a given field || || || || In computability theory, you should always be asking "Is this function total or partial?" or else you risk becoming confused.<br/><br/>In linear algebra (when done in a coordinate-free way) one should always ask "Is this vector space finite-dimensional?"<br/><br/>Other fields presumably have analogous stock questions that one should habitually ask of objects.<br />
|}<br />
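The "checking consistency with existing definition" row can be made concrete with a small computational sketch over finite sets. This is a toy illustration only; `preimage` and `forward_image` are hypothetical helper names written for this example, not standard library functions. The point is that the two possible readings of <math>f^{-1}(U)</math> for a bijection should produce the same set.

```python
def preimage(f, domain, U):
    """Inverse image f^{-1}(U) = {x in domain : f(x) in U}, defined for any f."""
    return {x for x in domain if f(x) in U}

def forward_image(g, S):
    """Forward image g(S) = {g(y) : y in S}."""
    return {g(y) for y in S}

# A bijection f : X -> Y, represented as a dict, and its inverse map.
X = {0, 1, 2}
f = {0: 'a', 1: 'b', 2: 'c'}
f_inv = {v: k for k, v in f.items()}

U = {'a', 'c'}
# The two readings of f^{-1}(U) -- preimage under f, or forward image
# under the inverse function -- must agree:
assert preimage(f.get, X, U) == forward_image(f_inv.get, U)  # both are {0, 2}
```

For a non-injective `f`, only the preimage reading is defined, which is why the definitions can coexist at all.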
<br />
==Ways to speed things up==<br />
<br />
There are several ways to speed up/skip steps in the above table, so that one doesn't spend too much time on definitions.<br />
<br />
===Lazy understanding===<br />
<br />
One idea is to skip trying to fully grok a definition at first and to see whether any problems actually arise, coming back to the definition only when one needs details from it. This is similar to [[wikipedia:Lazy evaluation|lazy evaluation]] in programming.<br />
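The analogy can be made concrete with a small Python sketch: a generator defers its work until a value is actually requested, much as one defers unpacking a definition until a proof demands the details. (This is an illustrative toy, not a claim about how studying actually works.)

```python
def unpack_definition():
    """Simulates carefully working through a definition, piece by piece."""
    for part in ["quantifiers", "assumptions", "edge cases"]:
        # Under lazy evaluation, this work happens only on demand.
        yield f"worked through {part}"

understanding = unpack_definition()   # nothing has been computed yet
first = next(understanding)           # work happens only when a detail is needed
```

Here `first` is `"worked through quantifiers"`, and the remaining pieces stay unexamined until some later `next` call forces them.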
<br />
===Building off similar definitions===<br />
<br />
If a similar definition has just been introduced (and one has taken the time to understand it), the new definition will not need as much time to understand: one only needs to focus on the differences between the two definitions. For instance, after one has understood set union, one can relatively quickly understand set intersection.<br />
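The union/intersection example can be sketched in a few lines of Python (a toy illustration): once union is understood, the only difference to absorb for intersection is a single connective in the membership condition.

```python
A, B = {1, 2, 3}, {2, 3, 4}
candidates = set(list(A) + list(B))  # all elements worth testing

# The two definitions differ only in "or" vs "and":
union = {x for x in candidates if x in A or x in B}
intersection = {x for x in candidates if x in A and x in B}
```

Here `union` is `{1, 2, 3, 4}` and `intersection` is `{2, 3}`, matching Python's built-in `A | B` and `A & B`.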
<br />
===Relying on experience and intuition===<br />
<br />
Eventually, after one has studied a lot of mathematics, understanding definitions becomes more automatic. One can gain an intuition of which steps are important for a particular definition, or when to spend some time and when to move quickly. One naturally asks the important questions, and can let curiosity guide one's exploration.<br />
<br />
==When reading textbooks==<br />
<br />
Most textbooks assume the reader is a competent mathematician, so they won't bother to explain what you should be doing at each definition.<br />
<br />
In definitions, it is traditional to use "if" to mean "if and only if". (Some authors use "iff" in definitions.)<br />
<br />
==See also==<br />
<br />
* [[Understanding theorems]]<br />
<br />
==External links==<br />
<br />
* http://www.abstractmath.org/MM/MMDefs.htm<br />
* https://www.maa.org/node/121566 lists some other steps for both theorems and definitions<br />
* https://en.wikipedia.org/wiki/Reverse_mathematics -- this one is more important for [[understanding theorems]], but the idea is to think, for each theorem, about its place in the structure of the theory and its relationship to other theorems. See, for example, https://en.wikipedia.org/wiki/Completeness_of_the_real_numbers#Forms_of_completeness and https://en.wikipedia.org/wiki/Axiom_of_choice#Equivalents and https://en.wikipedia.org/wiki/Mathematical_induction#Equivalence_with_the_well-ordering_principle John Stillwell (who also wrote ''Mathematics and Its History'') has a book called ''Reverse Mathematics'' that might explain this at an accessible level.<br />
* https://gowers.wordpress.com/2011/10/23/definitions/<br />
* https://gowers.wordpress.com/2011/10/25/alternative-definitions/<br />
* I think Tim Gowers's [https://gowers.wordpress.com/category/cambridge-teaching/basic-logic/ basic logic] series of blog posts has some discussions about definitions<br />
* https://www.google.com/search?q=%22There%20are%20good%20reasons%20why%20the%20theorems%20should%20all%20be%20easy%20and%20the%20definitions%20hard.%22<br />
<br />
[[Category:Mathematics]]</div>
|-<br />
| Coming up with examples || || Come up with some examples of objects that fit the definition. Emphasize edge cases. || Examples help to train your intuition of what the object "looks like". || For monotone increasing functions, an edge case would be the constant function.<br />
|-<br />
| Coming up with counterexamples || || || As with coming up with examples, the idea is to train your intuition. But with counterexamples, you do it by making sure your conception of what the object "looks like" isn't too inclusive. [https://en.wikipedia.org/wiki/Confirmation_bias#Wason's_research_on_hypothesis-testing] ||<br />
|-<br />
| Writing out a wrong version of the definition || || || || See [https://gowers.wordpress.com/2011/09/30/basic-logic-quantifiers/ this post] by Tim Gowers (search "wrong versions" on the page).<br />
|-<br />
| Understanding the kind of definition || || Generally a definition will do one of the following things: (1) it will construct a brand new type of object (e.g. definition of a ''function''); (2) it will take an existing type of object and create a predicate to describe some subclass of that type of object (e.g. take the integers and create the predicate ''even''); (3) it will define an operation on some class of objects (e.g. take integers and define the operation of ''addition'').<br />
|-<br />
| Checking well-definedness || If the definition defines an operation || || || Checking that addition on the integers is well-defined.<br />
|-<br />
| Checking consistency with existing definition || If the definition supersedes an older definition or it clobbers previously defined notation || || || Addition on the reals after addition on the rationals has been defined.<br/><br/>For any function <math>f:X\to Y</math> and <math>U\subset Y</math>, the inverse image <math>f^{-1}(U)</math> is defined. On the other hand, if a function <math>f : X\to Y</math> is a bijection, then <math>f^{-1} : Y \to X</math> is a function, so its forward image <math>f^{-1}(U)</math> is defined given any <math>U\subset Y</math>. We must check that these two are the same set (or else have some way to disambiguate which one we mean). (This example is mentioned in both Tao's ''Analysis I'' and in Munkres's ''Topology''.)<br />
|-<br />
| Disambiguating similar-seeming concepts || || || The idea is that sometimes, two different definitions "step on" the same intuitive concept that someone has. || (Example from Tao) "Disjoint" and "distinct" are both terms that apply to two sets. They even sound similar. Are they the same concept? Does one imply the other? It turns out, the answer is "no" to both: <math>\{1,2\}</math> and <math>\{2,3\}</math> are distinct but not disjoint, and <math>\emptyset</math> and <math>\emptyset</math> are disjoint but not distinct.<br/><br/>Partition of a set vs partition of an interval.<br><br>In metric spaces, the difference between bounded and totally bounded. They are not the same concept in general, but one implies the other, so one should prove an implication and find a counterexample. However, in certain metric spaces (e.g. Euclidean spaces) the two concepts ''are'' identical, so one should prove the equivalence.<br><br>Sequentially compact vs covering compact: equivalent in metric spaces, but not in more general topological spaces.<br><br>Cauchy sequence vs convergent sequence: equivalent in complete metric spaces, but not equivalent in general (although convergent implies Cauchy in general). However, even incomplete metric spaces can be completed, so the two ideas sort of end up blurring together.<br />
|-<br />
| Googling around/reading alternative texts || || Sometimes a definition is confusingly written (in one textbook) or the concept itself is confusing (e.g. because it is too abstract). It can help to look around for alternative expositions, especially ones that try to explain the intuitions/historical motivations of the definition. See also [[learning from multiple sources]]. || || In mathematical logic, the terminology for formal languages is a mess: some books define a structure as having a domain and an interpretation (so structure = (domain, interpretation)), while others define the same thing as interpretation = (domain, denotations), while still others define it as structure = (domain, signature, interpretation). The result is that in order to not be confused when e.g. reading an article online, one must become familiar with a range of definitions/terminology for the same concepts and be able to quickly adjust to the intended one in a given context.<br><br>To give another example from mathematical logic, there is the expresses vs captures distinction. But different books use terminology like arithmetically defines vs defines, represents vs expresses, etc. So again things are a mess.<br />
|-<br />
| Drawing a picture ||<br />
|-<br />
| Chunking/processing level by level || If a definition involves multiple layers of quantifiers. || || || See Tao's definitions for <math>\varepsilon</math>-close, eventually <math>\varepsilon</math>-close, <math>\varepsilon</math>-adherent, etc.<br />
|-<br />
| Asking some stock questions for a given field || || || || In computability theory, you should always be asking "Is this function total or partial?" or else you risk becoming confused.<br><br>In linear algebra (when done in a coordinate-free way) one should always ask "is this vector space finite-dimensional?"<br><br>I think some other fields also have this kind of question that you should always be asking of objects.<br />
|}<br />
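The well-definedness check mentioned in the table can be made concrete. Here is a sketch using the standard construction of the integers as equivalence classes of pairs of naturals (the construction is standard, but this particular write-up is illustrative):

```latex
\[
\begin{aligned}
&\text{Represent } \mathbb{Z} \text{ by pairs of naturals, with } (a,b) \text{ standing for } a-b:\\
&\qquad (a,b) \sim (c,d) \iff a+d = b+c.\\
&\text{Define addition on representatives: } [(a,b)] + [(c,d)] := [(a+c,\,b+d)].\\
&\text{Well-definedness: if } (a,b)\sim(a',b') \text{ and } (c,d)\sim(c',d'),\\
&\qquad\text{then } a+b'=b+a' \text{ and } c+d'=d+c'.\\
&\text{Adding these equations: } (a+c)+(b'+d') = (b+d)+(a'+c'),\\
&\qquad\text{i.e. } (a+c,\,b+d) \sim (a'+c',\,b'+d'),\\
&\text{so the sum does not depend on the choice of representatives.}
\end{aligned}
\]
```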
<br />
==Ways to speed things up==<br />
<br />
There are several ways to speed up/skip steps in the above table, so that one doesn't spend too much time on definitions.<br />
<br />
===Lazy understanding===<br />
<br />
One idea is to skip trying to really grok a definition at first, and see whether anything bad happens; one then comes back to the definition only when one needs details from it. This is similar to [[wikipedia:Lazy evaluation|lazy evaluation]] in programming.<br />
<br />
===Building off similar definitions===<br />
<br />
If a similar definition has just been introduced (and one has taken the time to understand it), the new definition will not need as much time to understand: one only needs to focus on the differences between the two definitions. For instance, after one has understood set union, one can relatively quickly understand set intersection.<br />
<br />
===Relying on experience and intuition===<br />
<br />
Eventually, after one has studied a lot of mathematics, understanding definitions becomes more automatic. One gains an intuition for which steps are important for a particular definition, and for when to spend time and when to move quickly.<br />
<br />
==When reading textbooks==<br />
<br />
Most textbooks assume the audience is a competent mathematician, so they won't bother to explain what you should be doing at each definition.<br />
<br />
In definitions, it is traditional to use "if" to mean "if and only if". (Some authors use "iff" in definitions.)<br />
<br />
==See also==<br />
<br />
* [[Understanding theorems]]<br />
<br />
==External links==<br />
<br />
* http://www.abstractmath.org/MM/MMDefs.htm<br />
* https://www.maa.org/node/121566 lists some other steps for both theorems and definitions<br />
* https://en.wikipedia.org/wiki/Reverse_mathematics -- this one is more important for [[understanding theorems]], but the idea is to think about, for each theorem, its place in the structure of the theory and its relationship to other theorems. See for example https://en.wikipedia.org/wiki/Completeness_of_the_real_numbers#Forms_of_completeness and https://en.wikipedia.org/wiki/Axiom_of_choice#Equivalents and https://en.wikipedia.org/wiki/Mathematical_induction#Equivalence_with_the_well-ordering_principle. John Stillwell (who also wrote ''Mathematics and Its History'') has a book called ''Reverse Mathematics'' that might explain this at an accessible level.<br />
* https://gowers.wordpress.com/2011/10/23/definitions/<br />
* https://gowers.wordpress.com/2011/10/25/alternative-definitions/<br />
* I think Tim Gowers's [https://gowers.wordpress.com/category/cambridge-teaching/basic-logic/ basic logic] series of blog posts has some discussions about definitions<br />
* https://www.google.com/search?q=%22There%20are%20good%20reasons%20why%20the%20theorems%20should%20all%20be%20easy%20and%20the%20definitions%20hard.%22<br />
<br />
[[Category:Mathematics]]</div>Issa Ricehttps://learning.subwiki.org/w/index.php?title=Benjamin_Franklin_writing_technique&diff=773Benjamin Franklin writing technique2019-06-10T00:16:40Z<p>Issa Rice: </p>
<hr />
<div>See http://www.gutenberg.org/files/148/148-h/148-h.htm; start at "Three or four letters of a side had passed, when my father happened to find my papers and read them."<br />
<br />
For a summary, see e.g. https://excellence-in-literature.com/copywork-how-benjamin-franklin-taught-himself-to-write-well/ (there are several articles online discussing this).<br />
<br />
Franklin's technique takes advantage of several learning-related effects:<br />
<br />
* [[Generation effect|Generation]]/[[testing effect]]: without looking at the actual example piece of writing, he attempts to produce an imitation, i.e., he attempts to generate the piece of writing himself.<br />
* [[Spacing effect]]: before trying to produce the imitation, he leaves aside the example piece of writing for some time so that he forgets it to some extent.<br />
<br />
==See also==<br />
<br />
* [[List of terms related to generation]]<br />
<br />
==External links==<br />
<br />
* Duncan Sabien discusses this technique in the context of inventing rationality techniques in [https://www.youtube.com/watch?v=284_dY3_u6w&t=17m37s this EA Global talk]<br />
* [http://www.pathsensitive.com/2018/01/the-benjamin-franklin-method-of-reading.html "The Benjamin Franklin Method of Reading Programming Books"] by James Koppel discusses the general method and gives a short description of how to use it when learning from programming books</div>Issa Ricehttps://learning.subwiki.org/w/index.php?title=Importance_of_struggling_in_learning&diff=772Importance of struggling in learning2019-05-24T10:48:26Z<p>Issa Rice: /* External links */</p>
<hr />
<div>One axis along which [[explainer]]s (and also [[learner]]s) seem to differ is their degree of belief in the '''importance of struggling in learning'''. Roughly speaking, the "two sides" are:<br />
<br />
* "Don't needlessly struggle" view: a good explanation should be ''easy'' to process, and be as intuitive as possible. [[Aim low]] [http://lesswrong.com/lw/kh/explainers_shoot_high_aim_low/].<br />
* "Struggling is important/essential for understanding" view: a good explanation should be effortful for the [[learner]] to process. It should e.g. present misconceptions and make the learner "do the work".<br />
<br />
The truth might be some sort of mixture. To optimize for struggle would be to e.g. put the [[learner]] in a psychologically stressful state, with little support, to deliberately confuse them, etc., which seems unhelpful. On the other hand, it isn't clear what a totally intuitive explanation would look like. There might even be a "valley of bad intuitiveness", where a small amount of intuitiveness is bad.<br />
<br />
For learners, there is a temptation to [[Learning from multiple sources|search for the "clear explanations"]]. And to some extent this makes sense because some explanations really are awful. But is there a danger in finding and learning from explanations that make a subject "too easy"/deceptively easy?<br />
<br />
From the "How to use" of ''Thinking Physics'' by Lewis Carroll Epstein: "Why torture yourself thinking? Why jog? Why do push-ups? [&hellip;] you can't really appreciate the solution until you first appreciate the problem." From "This book": "Let this book, then, be your guide to mental push-ups. Think carefully about the questions and their answers ''before'' you read the answers offered by the author. '''You will find many answers don't turn out as you first expect. Does this mean you have no sense for physics? Not at all. Most questions were deliberately chosen to illustrate those aspects of physics which seem contrary to casual surmise. Revising ideas, even in the privacy of your own mind, is not painless work.'''"<br />
<br />
Adversarial framing:<br />
<br />
* "Don't needlessly struggle" view: Some [[learner]]s may invoke this view as a way to slack off or have an excuse for not understanding something. e.g. a student may say "this explanation is confusing, and I shouldn't have to needlessly struggle" and then give up trying to understand the topic.<br />
* "Struggling is important/essential for understanding" view: Some [[explainer]]s may take this view to cover up their bad explanations. e.g. a teacher may say "being confused is important for learning" to cover up their bad teaching, or to have an excuse for not improving as a teacher.<br />
<br />
One question I have is this: what percentage of the "good kind" of struggling is related to [[List of terms related to generation|trying to generate the answers yourself]] (vs some other kinds of struggling, like being plain confused, or confused about ambiguous phrasing, or whatever)? In other words, obviously generation results in struggling (you actually have to do work, not just passively consume!), but what can we say about the converse? Can we characterize "the good kind of struggling" as a certain type of generation?<br />
<br />
==See also==<br />
<br />
* [[List of terms related to generation]]<br />
* [[Summary table of methods of recall]]<br />
* [[wikipedia:Errorless learning|Errorless learning]]<br />
<br />
==External links==<br />
<br />
* [https://www.youtube.com/watch?v=eVtCO84MDj8 Video that argues for the importance of struggling in learning physics]<br />
* [https://byorgey.wordpress.com/2009/01/12/abstraction-intuition-and-the-monad-tutorial-fallacy/ Abstraction, intuition, and the “monad tutorial fallacy”]: "What I term the “monad tutorial fallacy,” then, consists in failing to recognize the critical role that struggling through fundamental details plays in the building of intuition. This, I suspect, is also one of the things that separates good teachers from poor ones. If you ever find yourself frustrated and astounded that someone else does not grasp a concept as easily and intuitively as you do, even after you clearly explain your intuition to them (“look, it’s really quite simple,” you say…) then you are suffering from the monad tutorial fallacy."<br />
* [http://wirehead-wannabe.tumblr.com/post/166830719366/the-secret-tedious-math-behind-famous-ppls this post] also discusses something similar (struggling with low-level ideas is important for building up to high-level ones)<br />
* https://en.wikipedia.org/wiki/Desirable_difficulty<br />
* [https://wiki.lesswrong.com/wiki/Meditation Meditation] page on LessWrong Wiki: "Noting your prior reaction to the meditation-prompt is particularly important because conclusions about rationality often sound obvious in retrospect, making it hard for people to visualize the diff between "what I thought before" and "what I thought afterward". Explicitly knowing this difference is important to learning and memory formation."<br />
* Brown et al.'s ''Make It Stick'' has a section called "Undesirable Difficulties" that contrasts desirable vs undesirable difficulties.<br />
* "Before proceeding further, we need to emphasize the importance of the word ''desirable''. Many difficulties are undesirable during instruction and forever after. Desirable difficulties, versus the array of undesirable difficulties, are desirable because they trigger encoding and retrieval processes that support learning, comprehension, and remembering. If, however, the learner does not have the background knowledge or skills to respond to them successfully, they become undesirable difficulties." [https://teaching.yale-nus.edu.sg/wp-content/uploads/sites/25/2016/02/Making-Things-Hard-on-Yourself-but-in-a-Good-Way-2011.pdf (Bjork and Bjork)]<br />
* "I strongly suspect the seeming cognitive ease was masking actually grappling with what is being said in the earlier versions of the statement." [https://www.hedonisticlearning.com/posts/the-pedagogy-of-logic-a-rant.html]<br />
* https://news.ycombinator.com/item?id=18582013<br />
* Hiebert & Grouws. [https://pdfs.semanticscholar.org/3b2e/2aabd07c64bb65408a3891902be4b7277cd6.pdf "The Effects of Classroom Mathematics Teaching on Students' Learning"] 2007. p. 387</div>Issa Ricehttps://learning.subwiki.org/w/index.php?title=Incubation-based_studying&diff=771Incubation-based studying2019-05-01T18:34:41Z<p>Issa Rice: /* External links */</p>
<hr />
<div>'''Incubation-based studying''' (there might be a better or more standard term) is the idea that one can make more progress on solving a problem/displaying creativity by working on the problem in a concentrated manner, then leaving the problem aside, then coming back to the problem after a break (it is after the break that the problem gets solved).<br />
<br />
The important thing here is that this term should be agnostic about the underlying mechanism (so maybe "incubation" isn't such a good term after all, since it seems to single out the unconscious-processing mechanism); at the least, there should be a term reserved for referring to the overall phenomenon in a mechanism-agnostic manner. The phenomenon might be due to "subconscious processing", to taking a fresh look at the problem, to something else entirely, or to some combination of these.<br />
<br />
Of course, depending on the mechanism, specific strategies when studying can change. For example, (1) how hard one tries during the initial concentrated session, (2) how long the break is, (3) how many "parallel threads" to have for different problems, and (4) what one does during the break, are all parameters that can be tweaked when one applies this technique, and their optimal values seem to depend on the underlying mechanism. For example, if "subconscious processing" is the underlying mechanism, then presumably one cannot "subconsciously work on" hundreds of problems simultaneously. On the other hand, if the underlying mechanism is that this method gives a "fresh look" at problems, then one might want to attempt as many problems as possible, to "flush the buffer".<br />
<br />
==Notes==<br />
<br />
TODO: process the quotes below into the article.<br />
<br />
In his book ''How to Become a Straight-A Student'' [[Cal Newport]] writes:<br />
<br />
<blockquote>Next, try to solve the problem in the most obvious way possible. This, of course, probably won't work, because most difficult problems are tricky by nature. By failing in this initial approach, however, you will have at least identified what makes this problem hard. Now you are ready to try to come up with a real solution.<br /><br />The next step is counterintuitive. After you've primed the problem, put away your notes and move on to something else. Instead of trying to force a solution, think about the problem in between other activities. As you walk across campus, wait in line at the dining hall, or take a shower, bring up the problem in your head and start thinking through solutions. You might even want to go on a quiet hike or long car ride dedicated entirely to mulling over the question at hand.<br /><br />More often than not, after enough mobile consideration, you will finally stumble across a solution. Only then should you schedule more time to go back to the problem set, write it down formally, and work out the kinks. It's unclear exactly ''why'' solving problems is easier when you're on the go, but, whatever the explanation, it has worked for many students. Even better, it saves a lot of time, since most of your thinking has been done in little interludes between other activities, not during big blocks of valuable free time.</blockquote><br />
<br />
In her book ''A Mind for Numbers'', [[Barbara Oakley]] distinguishes between "focused mode" (same thing as [[wikipedia:task-positive network]]?) and "diffuse mode" (same thing as [[wikipedia:task-negative network]]?):<br />
<br />
<blockquote>Focused-mode thinking is essential for studying math and science. It involves a direct approach to solving problems using rational, sequential, analytical approaches. [...] Diffuse-mode thinking is also essential for learning math and science. It allows us to suddenly gain a new insight on a problem we’ve been struggling with and is associated with “big-picture” perspectives. Diffuse-mode thinking is what happens when you relax your attention and just let your mind wander. This relaxation can allow different areas of the brain to hook up and return valuable insights.</blockquote><br />
<br />
In his book ''The Mind Is Flat'', Nick Chater doesn't deny the phenomenon, but he disputes the explanation that incubation works by unconsciously working on the problem:<br />
<br />
<blockquote>Poincaré and Hindemith cannot possibly be right. If they are spending their days actively thinking about other things, their brains are not unobtrusively solving deep mathematical problems or composing complex pieces of music, perhaps over days or weeks, only to reveal the results in a sudden flash. Yet, driven by the intuitive appeal of unconscious thought, psychologists have devoted a great deal of energy in searching for evidence for unconscious mental work. In these studies, they typically give people some tricky problems to solve (e.g. a list of anagrams); after a relatively short period of time, they might instruct participants to continue, to take a break, to do another similar or different mental task, or even get a night’s sleep, before resuming their problems. According to the ‘unconscious work’ perspective, resuming after a break should lead to a sudden improvement in performance, compared with people who just keep going with the task. Studies in this area are numerous and varied, but I think the conclusions are easily summarized. First, the effects of breaks of all kinds are either negligible or non-existent: if unconscious work takes place at all, it is sufficiently ineffectual to be barely detectable, despite a century of hopeful attempts. Second, many researchers have argued that the minor effects of taking a break – and indeed, Poincaré’s and Hindemith’s intuitions – have a much more natural explanation, which involves no unconscious thought at all.<br /><br />The simplest version of the idea comes from thinking about why one gets stuck with a difficult problem in the first place. What is special about such problems is that you can’t solve them through a routine set of steps (in contrast, say, to adding up columns of numbers, which is laborious but routine) – you have to look at the problem in the ‘right way’ before you can make progress (e.g. 
with an anagram, you might need to focus on a few key letters; in deep mathematics or musical composition, the space of options might be large and varied). So ideally, the right approach would be to fluidly explore the range of possible ‘angles’ on the problem, until hitting on the right one. Yet this is not so easy: once we have been looking at the same problem for a while, we feel ourselves to be stuck or going round in circles. Indeed, the cooperative computational style of the brain makes this difficult to avoid.</blockquote><br />
<br />
http://paulgraham.com/top.html<br />
<br />
==See also==<br />
<br />
* [[Interleaving]]<br />
<br />
==External links==<br />
<br />
* [[wikipedia:Incubation (psychology)]]<br />
* https://www.greaterwrong.com/posts/SEq8bvSXrzF4jcdS8/tips-and-tricks-for-answering-hard-questions<br />
* [[wikipedia:Default mode network]]<br />
* [[wikipedia:Einstellung effect]]<br />
* https://pdfs.semanticscholar.org/76c6/ce0abf6f67f2c089df372261203989b43903.pdf HT [https://www.gwern.net/newsletter/2019/04 gwern]</div>Issa Ricehttps://learning.subwiki.org/w/index.php?title=Scope_for_improvement&diff=770Scope for improvement2019-04-25T09:17:26Z<p>Issa Rice: </p>
<hr />
<div>(There might already be a standard term for this that I don't know about.)<br />
<br />
By '''scope for improvement''' of a skill or learning technique, I mean something like the difference between "how much a person highly skilled in this technique (i.e. a virtuoso user) gets out of it" versus "how much a novice at this technique gets out of it".<br />
<br />
It's sort of related to terms like "high variance" and "heavy-tailed".<br />
<br />
Thinking in terms of scope for improvement encourages looking at techniques in terms of "What can I get out of this if I master this technique?" rather than "How much did the average participant in this study gain out of this technique?".<br />
<br />
There is a distinction between interpersonal skill level differences (how good the best person at the technique is compared to you) and intrapersonal differences (how much you can improve if you practice). The two are related because interpersonal differences provide evidence about potential intrapersonal differences. Strictly speaking, scope for improvement (in the sense I am thinking about) is only about the intrapersonal difference, or rather the interpersonal average of intrapersonal differences.<br />
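The intrapersonal/interpersonal distinction above can be given a rough formalization. This is only a sketch, and the notation is invented here for illustration (it is not standard):

```latex
% g_p(s): how much person p gets out of the technique at skill level s.
% Intrapersonal scope for improvement for person p:
%   S_p = g_p(s_{\max,p}) - g_p(s_{\text{novice}})
% Scope for improvement of the technique itself, in the sense of this
% article (the interpersonal average of intrapersonal differences):
\[
  S = \mathbb{E}_p\!\left[ g_p(s_{\max,p}) - g_p(s_{\text{novice}}) \right]
\]
```

On this reading, studies that report the average gain of (mostly novice) participants estimate something closer to <math>g_p(s_{\text{novice}})</math> than to <math>S</math>.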
<br />
==Relation to idea inoculation==<br />
<br />
Scope for improvement seems related to [[idea inoculation]] and [[inferential distance]]. If the scope for improvement of a technique is not made clear on initial exposure to the technique (e.g. due to a large inferential distance between the [[Explainer|person explaining the technique]] and the [[Learner|person reading about the technique]], or the person explaining the technique not being an expert), then this person can develop a resistance to further attempts to explain the technique (due to idea inoculation).<ref>Duncan A. Sabien. [https://medium.com/@ThingMaker/idea-inoculation-inferential-distance-848836a07a5b "Idea Inoculation + Inferential Distance"]. July 27, 2018. ''Medium''. Retrieved October 12, 2018.</ref><ref>Kaj Sotala. [https://news.ycombinator.com/item?id=10911653 Comment on "The Happiness Code: Cold, hard rationality"]. January 15, 2016. ''Hacker News''. Retrieved October 12, 2018.</ref><ref>Duncan Sabien. [https://www.greaterwrong.com/posts/pjGGqmtqf8vChJ9BR/unofficial-canon-on-applied-rationality/comment/rtmmvSjyDw8EMXRW8 Comment on "Unofficial Canon on Applied Rationality"]. February 15, 2016. ''LessWrong''. Retrieved October 12, 2018.</ref><br />
<br />
==Examples==<br />
<br />
* [[Spaced repetition]] seems like a [[learning technique]] with large scope for improvement. My impression is that most people use it for things like learning foreign language vocabulary and other "isolated facts" they want to memorize, and conclude that there is little scope for improvement. What they don't do is look at the "masters of spaced repetition" and try to cultivate the skill of using a spaced repetition program for understanding. [[Michael Nielsen]]: "I'm particularly grateful to Andy Matuschak for many thoughtful and enjoyable conversations, and especially for pointing out how unusual is the view that Anki can be a virtuoso skill for understanding, not just a means of remembering facts."<ref>Michael A. Nielsen. [http://augmentingcognition.com/ltm.html "Augmenting Long-term Memory"]. July 2018. Retrieved October 12, 2018.</ref><br />
* Probably many mundane skills, like counting, have limited scope for improvement. Most people can't learn to count many times faster, many times more accurately, or to very large numbers. (At that point, they would probably want to use a counting device that just supports an increment operation.)<br />
* Probably a learning technique like highlighting/underlining also has limited scope for improvement.<br />
<br />
==See also==<br />
<br />
==References==<br />
<br />
<references/><br />
<br />
==External links==<br />
<br />
* https://www.lesswrong.com/posts/Nu3wa6npK4Ry66vFp/a-sense-that-more-is-possible</div>Issa Ricehttps://learning.subwiki.org/w/index.php?title=Self-explanation&diff=769Self-explanation2019-04-14T09:35:35Z<p>Issa Rice: </p>
<hr />
<div>'''Self-explanation''' is a [[learning technique]] where the [[learner]] explains the steps they take in solving a problem or their processing of new information to themselves.<br />
<br />
==History==<br />
<br />
Dunlosky et al. (2013) [http://www.indiana.edu/~pcl/rgoldsto/courses/dunloskyimprovinglearning.pdf] call a 1983 study by Berry "the seminal study on self-explanation".<br />
<br />
==Software engineering==<br />
<br />
Closely related to self-explanation is a technique called ''rubber duck debugging'' (or ''rubber ducking''), where a programmer explains a software problem, step by step, to an inanimate object such as a rubber duck (or to someone who knows nothing about programming) to help them debug code.<br />
<br />
==External links==<br />
<br />
* https://siderea.dreamwidth.org/1368412.html</div>Issa Ricehttps://learning.subwiki.org/w/index.php?title=Learning_through_osmosis&diff=768Learning through osmosis2019-04-03T04:17:37Z<p>Issa Rice: /* Notes */</p>
<hr />
<div>'''Learning through osmosis''' (also called '''learning by osmosis''', '''learning via osmosis''', and '''osmotic learning''') is the idea that one can learn things through a mysterious/not-well-understood method where one immerses oneself in some environment.<br />
<br />
==Notes==<br />
<br />
Is this how people learn their native language?<br />
<br />
In math:<br />
<br />
<blockquote>Here's a phenomenon I was surprised to find: you'll go to talks, and hear various words, whose definitions you're not so sure about. At some point you'll be able to make a sentence using those words; you won't know what the words mean, but you'll know the sentence is correct. You'll also be able to ask a question using those words. You still won't know what the words mean, but you'll know the question is interesting, and you'll want to know the answer. Then later on, you'll learn what the words mean more precisely, and your sense of how they fit together will make that learning much easier.<ref>Ravi Vakil. [http://math.stanford.edu/~vakil/potentialstudents.html "For potential Ph.D. students"].</ref></blockquote><br />
<br />
https://www.greaterwrong.com/search?q=osmosis<br />
<br />
https://www.greaterwrong.com/posts/zLZDxXbcXP3hdM3sh/osmosis-learning-a-crucial-consideration-for-the-craft<br />
<br />
https://www.greaterwrong.com/posts/9SaAyq7F7MAuzAWNN/teaching-the-unteachable<br />
<br />
https://gowers.wordpress.com/2009/01/27/is-massively-collaborative-mathematics-possible/#comment-1782<br />
<br />
==See also==<br />
<br />
* [[Importance of struggling in learning]]<br />
<br />
==References==<br />
<br />
<references/></div>Issa Ricehttps://learning.subwiki.org/w/index.php?title=Teaching_for_understanding_versus_teaching_for_creation&diff=759Teaching for understanding versus teaching for creation2019-02-20T23:09:40Z<p>Issa Rice: </p>
<hr />
<div>(there might be a more standard term for this distinction)<br />
<br />
'''Teaching for understanding versus teaching for creation''' refers to the distinction between teaching a [[learner]] to simply understand the material (which allows them to use the material in simple applications) versus teaching the learner to create new ideas in the subject.<br />
<br />
Here is a rough categorization (not necessarily very accurate):<br />
<br />
{| class="wikitable"<br />
|-<br />
! Teaching for understanding !! Teaching for creation<br />
|-<br />
| Undergraduate curriculum (teaches standard topics in a field) || Graduate school (is supposed to teach students to advance the field)<br />
|-<br />
| Teaching the object-level skill/material || Teaching a meta-level skill (note: there is more than one way to "go meta" from the object level, e.g. one could also "go meta" by learning about how to learn, rather than learning how to create)<br />
|-<br />
| Teaching of material that has been systematized (e.g. linear algebra has been systematized and is well-understood) (note: this does ''not'' mean that the ''act of teaching itself'' has been systematized; linear algebra is systematized even if people have not figured out how to teach it) || Teaching of material/skills that have not been systematized (e.g. the act of inventing linear algebra from scratch has ''not'' been systematized, and is not well-understood)<br />
|-<br />
| Both [[positive and negative example]]s are available || Positive examples are hard to convey, while negative examples are available<br />
|}<br />
<br />
The meta levels are somewhat confusing, so let me try listing them:<br />
<br />
# object level (linear algebra): this is what a typical student taking a linear algebra course does<br />
# (how to invent linear algebra): this is what the people who invented linear algebra did, or what a highly above-average student taking a linear algebra course might do, if they were trying to really understand the subject<br />
# (how to teach linear algebra): this is what a graduate student figuring out how to teach a linear algebra course does<br />
# (how to teach how to invent linear algebra): this is what Jeffreyssai (i.e. someone who wants to teach his students how to invent) must figure out<ref>https://wiki.lesswrong.com/wiki/Beisutsukai</ref><br />
# (how to invent how to teach linear algebra): what an unusual instructor of linear algebra does, if they want to figure out how to best teach linear algebra<br />
<br />
==Differential teaching strategies==<br />
<br />
Why does the "teaching for understanding" vs "teaching for creation" distinction matter? One reason is that depending on the audience/goal, it makes sense to alter the teaching strategy.<br />
<br />
For example, if the goal is to create, it makes sense to prove as many theorems as possible without looking at the proofs in the book. It might make sense (after empirical investigation) to also do this even if the goal is just to understand the material (see [[pre-testing effect]]).<br />
<br />
==References==<br />
<br />
<references/></div>Issa Ricehttps://learning.subwiki.org/w/index.php?title=List_of_problems_in_mathematical_notation&diff=751List of problems in mathematical notation2019-02-19T04:14:30Z<p>Issa Rice: </p>
<hr />
<div>This page gives a '''list of problems in mathematical notation'''. The list focuses on problems with the symbolism of mathematics rather than other communication problems/conventions (such as using "if" to mean "iff" in definitions, not introducing variables, etc.).<br />
<br />
{| class="sortable wikitable"<br />
|-<br />
! Name !! Description !! How to avoid<br />
|-<br />
| Overloading/abuse of notation || This happens when the same symbol is used for different purposes. For instance, parentheses are used to group expressions, to write tuples, for function calls, and in superscripts or subscripts with various meanings (<math>f^{(i)}</math> for the <math>i</math>th derivative or the <math>i</math>th function in some sequence). The equals sign is used for equality both in the object language and the metalanguage in mathematical logic, and as part of the expression <math>X=x</math> to define an event in probability. In linear algebra, 0 denotes both the scalar 0 and the zero vector of any dimension.<br />
|-<br />
| Type error<br />
|-<br />
| Failure of Leibniz's law, "same object denotes different things" || In probability, <math>\Pr(X=x)</math> is often abbreviated <math>\Pr(x)</math>. Then if we substitute a concrete value such as <math>x:=3</math>, we get <math>\Pr(3)</math>, but now we have lost the information about which random variable we are talking about.<br />
|-<br />
| "different objects denoted by the same thing" || see e.g. [https://machinelearning.subwiki.org/wiki/User:IssaRice/Type_checking_Pearl's_belief_propagation_notation here], where <math>M_{y\,|\,x}</math> can mean two different things on each side of an equation.<br />
|-<br />
| Omission of index || Writing things like <math>\sum_i f(x_i)</math> (unclear what set the index ranges over, or the order in which the terms are added, which can sometimes [[wikipedia:Riemann series theorem|matter]]) or even just <math>\sum f(x_i)</math> (unclear which variable is the index).<br />
|-<br />
| Ambiguous order of operations || e.g. <math>a/bc</math><br />
|-<br />
| Undefined operations || e.g. I often see in mathematical logic things like <math>\Gamma \cup \phi</math> to mean <math>\Gamma \cup \{\phi\}</math>, where <math>\Gamma</math> is a set of formulas and <math>\phi</math> is a formula.<br />
|-<br />
| Strange reorderings based on context || e.g. https://machinelearning.subwiki.org/wiki/User:IssaRice/Minus_notation_in_game_theory<br />
|}<br />
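The Leibniz-law failure in the table can be illustrated with a small Python sketch (the dictionary and function names here are invented for illustration, not from any standard library): keeping the random variable explicit, as in <math>\Pr(X=x)</math>, keeps a query unambiguous, while the abbreviated <math>\Pr(x)</math> breaks down as soon as a concrete value is substituted for <math>x</math>.

```python
# Hypothetical sketch: why the abbreviation Pr(x) for Pr(X=x) fails
# Leibniz's law. Two random variables can take the same value with
# different probabilities, so after substituting x := 3, the expression
# "Pr(3)" no longer identifies which random variable is meant.

# Marginal distributions of two random variables, X and Y.
dist = {
    "X": {1: 0.2, 2: 0.3, 3: 0.5},
    "Y": {1: 0.5, 2: 0.3, 3: 0.2},
}

def pr(var, value):
    """Explicit notation Pr(var = value): unambiguous."""
    return dist[var].get(value, 0.0)

def pr_abbrev(value):
    """Abbreviated notation Pr(value): cannot be evaluated, because
    the value alone does not determine the random variable."""
    raise ValueError(
        f"Pr({value}) is ambiguous: could refer to any of {sorted(dist)}"
    )

# The same substituted value gives different answers depending on the
# variable, which is exactly the information Pr(3) throws away:
print(pr("X", 3))  # 0.5
print(pr("Y", 3))  # 0.2
```

The fix used by careful probability texts is the same one the explicit <code>pr</code> function uses: subscript the measure with the random variable, e.g. <math>p_X(3)</math> versus <math>p_Y(3)</math>.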
<br />
[[Category:Mathematics]]</div>Issa Ricehttps://learning.subwiki.org/w/index.php?title=Examples_in_mathematics&diff=741Examples in mathematics2019-02-19T03:23:11Z<p>Issa Rice: /* Unit testing and examples */</p>
<hr />
<div>'''Examples in mathematics''' have a different flavor than examples in other disciplines. This is probably because [[definitions in mathematics]] are different from definitions in other disciplines (mathematical definitions are exact). Some [https://www.readthesequences.com/The-Cluster-Structure-Of-Thingspace common] [https://wiki.lesswrong.com/wiki/How_an_algorithm_feels problems] of deciding whether something is or is not an example do not appear in mathematics. Instead, there are other problems.<br />
<br />
==Unit testing and examples==<br />
<br />
A common problem in math is that one comes in with some preconceived idea of what an object should "look like" which is different from what the definition says. In other words, there is a mismatch between one's intuitive notion and the definition.<br />
<br />
Take the definition of a function: a function is an object that assigns to each element of one set a unique element of another set. Someone not familiar with the formal definition might mistakenly think of a function as "something that is defined by a formula".<br />
<br />
In giving examples, it is particularly important to give examples in the places where intuition and the formal definition disagree. By default, the [[learner]] may have a tendency to [[wikipedia:Peter Cathcart Wason#Wason and the 2-4-6 Task|search only for positive examples]].<br />
<br />
One can view the giving of examples as analogous to writing [[wikipedia:Unit testing|unit tests]] in programming. It is good to have some obvious examples, but one also wants to test the software on surprising cases (called "edge cases") to make sure the software really works.<br />
<br />
There is a tendency in human thinking to leave ideas merely at the verbal level, i.e. at a level where the ideas don't constrain anticipation.<ref>https://www.readthesequences.com/A-Technical-Explanation-Of-Technical-Explanation</ref> Giving surprising examples and non-examples is one way to catch people's fuzzy thinking and to correct them.<br />
<br />
{| class="wikitable"<br />
|-<br />
!<br />
! Is an example according to definition<br />
! Is not an example according to definition<br />
|-<br />
! Is an example according to intuition<br />
| An "obvious" example, or central example.<br />
| A surprising non-example. False positives, also known as type I errors.<br />
|-<br />
! Is not an example according to intuition<br />
| A surprising example. False negatives, also known as type II errors.<br />
| An obvious non-example.<br />
|}<br />
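The unit-testing analogy can be made literal with a hypothetical checker for the function property (the names "is_function", "pairs", etc. are illustrative, not from any library), exercised on both a central example and edge cases:<br />

```python
def is_function(pairs, domain, codomain):
    """Check whether a set of (input, output) pairs is the graph of a
    function from domain to codomain."""
    outputs = {}
    for a, b in pairs:
        if a not in domain or b not in codomain:
            return False
        if a in outputs and outputs[a] != b:
            return False  # same input sent to two different outputs
        outputs[a] = b
    # every element of the domain must be assigned some output
    return all(a in outputs for a in domain)

# "Obvious" test: a formula-defined function on a small domain.
domain = {0, 1, 2}
graph = {(x, 2 * x * x - 3 * x + 5) for x in domain}
assert is_function(graph, domain, set(range(100)))

# Edge-case test: the empty function passes vacuously.
assert is_function(set(), set(), {1, 2, 3})

# Edge-case test: no function has a nonempty domain and an empty codomain.
assert not is_function({(1, 1)}, {1}, set())
```

As with unit tests, the surprising cases are the ones most likely to expose a mismatch between intuition and the definition.<br />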
<br />
===Obvious examples===<br />
<br />
An "obvious" example, or central example, is one that both intuition and the formal definition accept.<br />
<br />
Let <math>f : \mathbf R \to \mathbf R</math> be defined by <math>f(x) = 2x^2 - 3x + 5</math>. This does define a function, and someone who thought that a function is "something that is defined by a formula" would think that this is a function.<br />
<br />
===Surprising non-examples===<br />
<br />
A surprising non-example. Let <math>f : \mathbf Q \to \mathbf Z</math> be defined by <math>f(a/b) = a</math> (i.e. a function that outputs the numerator of a fraction). This does ''not'' define a function. To see this, note that <math>f(1/2) = 1</math> and <math>f(3/6) = 3</math>. But <math>1/2=3/6</math> so we must have <math>f(1/2)=f(3/6)</math> (a function must output a unique object for any given object), but <math>1\ne3</math>, so something has gone wrong. It turns out that each fraction has many different representations, and the idea of taking "the" numerator does not make sense, unless we constrain the representation somehow (e.g. by reducing the fraction and always putting any minus sign in the numerator). Someone who thought that a function is "something that is defined by a formula" might mistakenly think "this thing is defined by a formula, so must be a function".<br />
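Python's fractions module makes the same point: a rational number does not remember which of its many representations it was written in (a minimal sketch, using only the standard library):<br />

```python
from fractions import Fraction

# 1/2 and 3/6 denote the same rational number...
assert Fraction(1, 2) == Fraction(3, 6)

# ...so any genuine function on the rationals must agree on them.
# Fraction normalizes on construction, which is why .numerator is
# well defined -- the representation "3/6" is already gone:
assert Fraction(3, 6).numerator == 1  # not 3

# A "numerator of the expression as written" would need 1/2 -> 1 but
# 3/6 -> 3, contradicting f(1/2) = f(3/6); it is not a function.
```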
<br />
As another example, let <math>f : A \to \emptyset</math> be a function where <math>A \ne \emptyset</math>. This does ''not'' define a function. To see this, note that since <math>A\ne \emptyset</math>, we must have some <math>a \in A</math>. By the definition of function, we would have <math>f(a) \in \emptyset</math>, which is a contradiction since <math>\emptyset</math> is empty. Someone who was familiar with the empty function (see the next cell in this table) might conflate this example with it, and think that this is a function.<br />
<br />
The examples in this cell are false positives, also known as type I errors.<br />
<br />
===Surprising examples===<br />
<br />
A surprising example. Let <math>f : \mathbf N \to \mathbf N</math> be defined by <math>f(n) = n\text{th digit of }\pi</math>. This does define a function, but someone who thought that a function is "something that is defined by a formula" wouldn't think it is a function.<br />
<br />
Another example is the empty function <math>f : \emptyset \to A</math> for any set <math>A</math>. This does define a function, but the function doesn't "do" anything. Since it is an "extreme" example of a function, someone who was only used to dealing with "normal-looking" functions (or someone who isn't used to working with the empty set or vacuous conditions) might dismiss this example.<br />
<br />
As a third example, let <math>\mathcal M</math> be the set of all Turing machines, and let <math>f : \mathcal M \times \mathbf N \to \{\text{true}, \text{false}\}</math> be defined by <math>f(M,n) = \text{Turing machine }M\text{ halts on input }n</math>. This does define a function, although the function is not ''computable''. Someone familiar with the halting problem might substitute "is a well-defined function" with "is a computable function" and say that this is not a function. In this example, it is not the intuitive notion of "function" that is getting in the way, but rather, a different technical concept (i.e., that of a computable function) that is getting in the way.<br />
<br />
The examples in this cell are false negatives, also known as type II errors.<br />
<br />
===Obvious non-examples===<br />
<br />
An obvious non-example. Let <math>f : \mathbf R \to \mathbf R</math> be defined by <math>f(x) = x/0</math>. This does ''not'' define a function because division by zero is undefined. Someone familiar with division by zero would recognize this, and correctly reject this example.<br />
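In a programming language the failure is immediate: evaluating <math>x/0</math> raises an error rather than producing a value, so no input-output pair is ever formed (a minimal Python sketch):<br />

```python
def f(x):
    return x / 0  # never yields a value for any input

# f assigns no value to any real number, so it does not define a
# function on R; every evaluation fails:
try:
    f(1.0)
    reached = True   # never happens
except ZeroDivisionError:
    reached = False
assert reached is False
```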
<br />
==Hierarchical nature of examples==<br />
<br />
Something can be considered "concrete" or "abstract" depending on the context. Consider a term like "metric space". One can give examples of metric spaces. On the other hand, a metric space is itself an example (of a structured space, of a topological space).<br />
<br />
==References==<br />
<br />
<references/><br />
<br />
[[Category:Mathematics]]</div>Issa Ricehttps://learning.subwiki.org/w/index.php?title=Examples_in_mathematics&diff=739Examples in mathematics2019-02-19T03:21:40Z<p>Issa Rice: /* Unit testing and examples */</p>
<hr />
<div>'''Examples in mathematics''' have a different flavor than examples in other disciplines. This is probably because [[definitions in mathematics]] are different from definitions in other disciplines (mathematical definitions are exact). Some [https://www.readthesequences.com/The-Cluster-Structure-Of-Thingspace common] [https://wiki.lesswrong.com/wiki/How_an_algorithm_feels problems] of deciding whether something is or is not an example do not appear in mathematics. Instead, there are other problems.<br />
<br />
==Unit testing and examples==<br />
<br />
A common problem in math is that one comes in with some preconceived idea of what an object should "look like" which is different from what the definition says. In other words, there is a mismatch between one's intuitive notion and the definition.<br />
<br />
Take the example of a definition of function. A function is some object that takes each object in some set to a unique object in another set. Someone who was not familiar with the formal definition might mistakenly think of a function as "something that is defined by a formula".<br />
<br />
In giving examples, it is particularly important to give examples in the places where intuition and the formal definition disagree. By default, the [[learner]] may have a tendency to [[wikipedia:Peter Cathcart Wason#Wason and the 2-4-6 Task|search only for positive examples]].<br />
<br />
One can view the giving of examples as analogous to writing [[wikipedia:Unit testing|unit tests]] in programming. It is good to have some obvious examples, but one also wants to test the software on surprising cases (called "edge cases") to make sure the software really works.<br />
<br />
There is a tendency in human thinking to leave ideas merely at the verbal level, i.e. at a level where the ideas don't constrain anticipation.<ref>https://www.readthesequences.com/A-Technical-Explanation-Of-Technical-Explanation</ref> Giving surprising examples and non-examples is one way to catch people's fuzzy thinking and to correct them.<br />
<br />
<br />
{| class="wikitable"<br />
|-<br />
!<br />
! Is an example according to definition<br />
! Is not an example according to definition<br />
|-<br />
! Is an example according to intuition<br />
| An "obvious" example, or central example.<br />
| A surprising non-example.<br />
|-<br />
! Is not an example according to intuition<br />
| A surprising example.<br />
| An obvious non-example.<br />
|}<br />
<br />
===Obvious examples===<br />
<br />
An "obvious" example, or central example.<br />
<br />
Let <math>f : \mathbf R \to \mathbf R</math> be defined by <math>f(x) = 2x^2 - 3x + 5</math>. This does define a function, and someone who thought that a function is "something that is defined by a formula" would think that this is a function.<br />
<br />
===Surprising non-examples===<br />
<br />
A surprising non-example. Let <math>f : \mathbf Q \to \mathbf Z</math> be defined by <math>f(a/b) = a</math> (i.e. a function that outputs the numerator of a fraction). This does ''not'' define a function. To see this, note that <math>f(1/2) = 1</math> and <math>f(3/6) = 3</math>. But <math>1/2=3/6</math> so we must have <math>f(1/2)=f(3/6)</math> (a function must output a unique object for any given object), but <math>1\ne3</math>, so something has gone wrong. It turns out that each fraction has many different representations, and the idea of taking "the" numerator does not make sense, unless we constrain the representation somehow (e.g. by reducing the fraction and always putting any minus sign in the numerator). Someone who thought that a function is "something that is defined by a formula" might mistakenly think "this thing is defined by a formula, so must be a function".<br />
<br />
As another example, let <math>f : A \to \emptyset</math> be a function where <math>A \ne \emptyset</math>. This does ''not'' define a function. To see this, note that since <math>A\ne \emptyset</math>, we must have some <math>a \in A</math>. By the definition of function, we would have <math>f(a) \in \emptyset</math>, which is a contradiction since <math>\emptyset</math> is empty. Someone who was familiar with the empty function (see the next cell in this table) might conflate this example with it, and think that this is a function.<br />
<br />
The examples in this section are false positives, also known as type I errors.<br />
<br />
===Surprising examples===<br />
<br />
A surprising example is one that the definition accepts but intuition rejects. Let <math>f : \mathbf N \to \mathbf N</math> be defined by <math>f(n) = n\text{th digit of }\pi</math>. This does define a function, but someone who thought that a function is "something that is defined by a formula" wouldn't think it is a function.<br />
<br />
Another example is the empty function <math>f : \emptyset \to A</math> for any set <math>A</math>. This does define a function, but the function doesn't "do" anything. Since it is an "extreme" example of a function, someone who was only used to dealing with "normal-looking" functions (or someone who isn't used to working with the empty set or vacuous conditions) might dismiss this example.<br />
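One informal way to see that the empty function satisfies the definition is to model a function as a lookup table. The sketch below (in Python; modeling functions as dicts is purely for illustration) shows that the defining condition holds vacuously.

```python
# Model a function f : X -> Y as a dict: each key is an element of the
# domain X, and its value is the unique element of Y it maps to.
empty_function = {}  # the empty function: its domain is the empty set

A = {1, 2, 3}  # an arbitrary codomain

# The condition "every x in the domain maps to a unique element of A"
# is vacuously true: there is nothing in the domain to check.
assert all(empty_function[x] in A for x in empty_function)
```

The assertion never actually looks anything up, which is precisely the "vacuous condition" the text warns can make this example easy to dismiss.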
<br />
As a third example, let <math>\mathcal M</math> be the set of all Turing machines, and let <math>f : \mathcal M \times \mathbf N \to \{\text{true}, \text{false}\}</math> be defined by <math>f(M,n) = \text{Turing machine }M\text{ halts on input }n</math>. This does define a function, although the function is not ''computable''. Someone familiar with the halting problem might substitute "is a well-defined function" with "is a computable function" and say that this is not a function. In this example, it is not the intuitive notion of "function" that causes trouble, but a different technical concept: that of a computable function.<br />
<br />
The examples in this section are false negatives, also known as type II errors.<br />
<br />
===Obvious non-examples===<br />
<br />
An obvious non-example is one that both the definition and intuition reject. Let <math>f : \mathbf R \to \mathbf R</math> be defined by <math>f(x) = x/0</math>. This does ''not'' define a function because division by zero is undefined. Someone familiar with division by zero would recognize this, and correctly reject this example.<br />
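The unit-testing analogy is direct here: attempting to evaluate the purported "formula" fails immediately rather than producing an element of <math>\mathbf R</math>. A minimal Python sketch:

```python
def f(x):
    # The "rule" x / 0 does not assign a value to any input:
    # evaluating it raises an error instead of returning a real number.
    return x / 0

try:
    f(1.0)
    value_assigned = True
except ZeroDivisionError:
    value_assigned = False

assert not value_assigned  # the rule fails to define a function
```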
<br />
==Hierarchical nature of examples==<br />
<br />
Something can be considered "concrete" or "abstract" depending on the context. Consider a term like "metric space". One can give examples of metric spaces. On the other hand, a metric space is itself an example (of a structured space, of a topological space).<br />
<br />
==References==<br />
<br />
<references/><br />
<br />
[[Category:Mathematics]]</div>
<hr />
<div>'''Examples in mathematics''' have a different flavor from examples in other disciplines. This is probably because [[definitions in mathematics]] are different from definitions in other disciplines (mathematical definitions are exact). Some [https://www.readthesequences.com/The-Cluster-Structure-Of-Thingspace common] [https://wiki.lesswrong.com/wiki/How_an_algorithm_feels problems] of deciding whether something is or is not an example do not appear in mathematics. Instead, there are other problems.<br />
<br />
==Unit testing and examples==<br />
<br />
A common problem in math is that one comes in with some preconceived idea of what an object should "look like" that is different from what the definition says. In other words, there is a mismatch between one's intuitive notion and the definition.<br />
<br />
Take the example of a definition of function. A function is some object that takes each object in some set to a unique object in another set. Someone who was not familiar with the formal definition might mistakenly think of a function as "something that is defined by a formula".<br />
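The definition above can be sketched computationally for finite sets: a function is a set of input–output pairs that is total (every domain element appears as an input) and single-valued (no input appears with two different outputs). The following Python sketch is illustrative; the name <code>is_function</code> and the encoding as pairs are our choices, not part of the article.

```python
def is_function(pairs, domain):
    """Check whether a set of (input, output) pairs defines a function
    on `domain`: each domain element must appear exactly once as an input."""
    inputs = [a for (a, _) in pairs]
    # Totality: every element of the domain has at least one output.
    total = all(inputs.count(a) >= 1 for a in domain)
    # Uniqueness: no input is paired with two different outputs.
    single_valued = len(set(inputs)) == len(inputs)
    return total and single_valued

# f(x) = 2x^2 - 3x + 5 restricted to a finite domain passes the check:
dom = [0, 1, 2]
f = {(x, 2*x**2 - 3*x + 5) for x in dom}
# A relation sending 1 to two different outputs fails it:
g = {(0, 0), (1, 1), (1, 2), (2, 4)}
```

Note that <code>is_function(set(), [])</code> returns <code>True</code>: the empty function discussed below is accepted, vacuously, by the same check.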
<br />
It is particularly important to give examples in the places where intuition and the formal definition disagree. By default, the [[learner]] may have a tendency to [[wikipedia:Peter Cathcart Wason#Wason and the 2-4-6 Task|search only for positive examples]].<br />
<br />
{| class="wikitable"<br />
|-<br />
!<br />
! Is an example according to definition<br />
! Is not an example according to definition<br />
|-<br />
! Is an example according to intuition<br />
| An "obvious" example, or central example. Let <math>f : \mathbf R \to \mathbf R</math> be defined by <math>f(x) = 2x^2 - 3x + 5</math>. This does define a function, and someone who thought that a function is "something that is defined by a formula" would think that this is a function.<br />
| A surprising non-example. Let <math>f : \mathbf Q \to \mathbf Z</math> be defined by <math>f(a/b) = a</math> (i.e. a function that outputs the numerator of a fraction). This does ''not'' define a function. To see this, note that <math>f(1/2) = 1</math> and <math>f(3/6) = 3</math>. But <math>1/2=3/6</math> so we must have <math>f(1/2)=f(3/6)</math> (a function must output a unique object for any given object), but <math>1\ne3</math>, so something has gone wrong. It turns out that each fraction has many different representations, and the idea of taking "the" numerator does not make sense, unless we constrain the representation somehow (e.g. by reducing the fraction and always putting any minus sign in the numerator). Someone who thought that a function is "something that is defined by a formula" might mistakenly think "this thing is defined by a formula, so must be a function".<br>As another example, let <math>f : A \to \emptyset</math> be a function where <math>A \ne \emptyset</math>. This does ''not'' define a function. To see this, note that since <math>A\ne \emptyset</math>, we must have some <math>a \in A</math>. By the definition of function, we would have <math>f(a) \in \emptyset</math>, which is a contradiction since <math>\emptyset</math> is empty. Someone who was familiar with the empty function (see the next cell in this table) might conflate this example with it, and think that this is a function.<br>The examples in this cell are false positives, also known as type I errors.<br />
|-<br />
! Is not an example according to intuition<br />
| A surprising example. Let <math>f : \mathbf N \to \mathbf N</math> be defined by <math>f(n) = n\text{th digit of }\pi</math>. This does define a function, but someone who thought that a function is "something that is defined by a formula" wouldn't think it is a function.<br>Another example is the empty function <math>f : \emptyset \to A</math> for any set <math>A</math>. This does define a function, but the function doesn't "do" anything. Since it is an "extreme" example of a function, someone who was only used to dealing with "normal-looking" functions (or someone who isn't used to working with the empty set or vacuous conditions) might dismiss this example.<br>As a third example, let <math>\mathcal M</math> be the set of all Turing machines, and let <math>f : \mathcal M \times \mathbf N \to \{\text{true}, \text{false}\}</math> be defined by <math>f(M,n) = \text{Turing machine }M\text{ halts on input }n</math>. This does define a function, although the function is not ''computable''. Someone familiar with the halting problem might substitute "is a well-defined function" with "is a computable function" and say that this is not a function. In this example, it is not the intuitive notion of "function" that is getting in the way, but rather, a different technical concept (i.e., that of a computable function) that is getting in the way.<br>The examples in this cell are false negatives, also known as type II errors.<br />
| An obvious non-example. Let <math>f : \mathbf R \to \mathbf R</math> be defined by <math>f(x) = x/0</math>. This does ''not'' define a function because division by zero is undefined. Someone familiar with division by zero would recognize this, and correctly reject this example.<br />
|}<br />
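The surprising non-example in the table can be replayed directly. Python's <code>fractions.Fraction</code> resolves the problem in exactly the way the table suggests: it reduces every fraction to a canonical representation (lowest terms, sign in the numerator), making "the numerator" well-defined. The sketch below is ours; <code>naive_numerator</code> is a hypothetical name for the table's <math>f(a/b) = a</math>.

```python
from fractions import Fraction

# Naive "numerator" on raw (a, b) representations: not well-defined on the
# rationals, because equal rationals can have different representations.
def naive_numerator(a, b):
    return a

assert Fraction(1, 2) == Fraction(3, 6)                # 1/2 = 3/6 as rationals...
assert naive_numerator(1, 2) != naive_numerator(3, 6)  # ...but the outputs differ: 1 vs 3

# Fraction stores the reduced form with the sign in the numerator, so the
# numerator of the canonical representation IS well-defined:
assert Fraction(3, 6).numerator == 1
assert Fraction(-3, 6).numerator == -1
```

The assertions all pass: the naive map disagrees on equal inputs, while the canonicalized map does not.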
<br />
==Hierarchical nature of examples==<br />
<br />
Something can be considered "concrete" or "abstract" depending on the context. Consider a term like "metric space". One can give examples of metric spaces. On the other hand, a metric space is itself an example (of a structured space, of a topological space).<br />
<br />
[[Category:Mathematics]]</div>Issa Ricehttps://learning.subwiki.org/w/index.php?title=Examples_in_mathematics&diff=727Examples in mathematics2019-02-19T02:40:32Z<p>Issa Rice: /* Unit testing and examples */</p>