A scratchpad for LaTeX formulas
(a) $ \forall x (Px)$ where P is some predicate taking one argument
(b) $ \forall x (Qx)$ where Q is also a predicate
(c) $ \exists y (Py \land Qy)$
∀x(¬Px)
$\exists x (Ax \land Bx )$
$\exists x (Ax \rightarrow Bx)$
$\forall \varepsilon \geq 0 \exists \delta \geq 0 : |f(x) - a| \leq \varepsilon$
Want a life hack?
Open any question on math.stackexchange.com; at the bottom there is a Your Answer field. You don't even have to be logged in, it's there anyway. Type your LaTeX formula into that field and it is rendered right below it as you type. You can keep editing it there until it looks the way you want.
The only thing to avoid there is the bare $\ast$ sign; write \ast instead.
There are surely other online formula editors, but everyone knows math.stackexchange.com and visits it regularly anyway.
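For example, pasting something like the following into the Your Answer box (an arbitrary test formula, nothing more) shows the rendered preview immediately below the field:
The golden ratio satisfies $\varphi^2 = \varphi + 1$, so $\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618$.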
>>85956
Then you might as well use overleaf.com - it's simpler and nicer, and it doesn't have the quirks specific to TeX embedded into websites.
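For reference, a minimal document that should compile as-is on Overleaf (just a sketch; the amsmath/amssymb packages are my assumption, they cover most symbols used in this thread):
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
A draft formula: \[ \exists y \, (Py \land Qy). \]
\end{document}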
$\exists x ( Nx \to Ix \to \neg \forall y (Nx \to Ix \to \neg x < y) )$
$$\xymatrix{X\ar[rd]_f\ar[rr]^{g}&&{X_k}\ar[ld]^{f_k}\\&Y}$$
$\langle \vec{x},\vec{y} \rangle = V^*$
$u_1,...,u_n,u_1',...,u_n'$
$u_1...u_n$ and $u_1'...u_n'$
$u_1',...,u_n'$
$\log_ab$
$xyz$
$y' = f(x,y)$
\[ f(n) =
\begin{cases}
n/2 & \quad \text{if } n \text{ is even}\\
-(n+1)/2 & \quad \text{if } n \text{ is odd}
\end{cases}
\]
[ f(n) =
\begin{cases}
n/2 & \quad \text{if } n \text{ is even}\\
-(n+1)/2 & \quad \text{if } n \text{ is odd}
\end{cases}
]
\begin{cases}
n/2 & \quad \text{if } n \text{ is even}\\
-(n+1)/2 & \quad \text{if } n \text{ is odd}
\end{cases}
f(n) =
\begin{cases}
n/2 & \quad \text{if } n \text{ is even}\\
-(n+1)/2 & \quad \text{if } n \text{ is odd}
\end{cases}
Blah-blah-blah
\[ f(n) =
\begin{cases}
n/2 & \quad \text{if } n \text{ is even}\\
-(n+1)/2 & \quad \text{if } n \text{ is odd}
\end{cases}
\]
blah-blah-blah
\[ f(n) =
\begin{cases}
n/2 & \quad \text{if } n \text{ is even}\\
-(n+1)/2 & \quad \text{if } n \text{ is odd}
\end{cases}
\]
blah-blah-blah
\[ f(n) =
\begin{cases}
x
y
\end{cases}
\]
\begin{cases}
n/2 \\
-(n+1)/2
\end{cases}
>>85801 (OP)
[math]\xymatrix{A \ar[r]^-f \ar[d]^\varphi&B\ar[d]^g\\C\ar[r]^\Psi&D}[/math]
$y' = \frac{y}{x}$
$x = 0$
[math]x = 1[/math]
$ x = 2 $
\begin{cases}
r_1 \\
r_2
\end{cases}
[math] (\forall n\in{\mathbb N})\Big((\forall i\in\{1;\dots;n-1\})P_i\Rightarrow P_n\Big)\Rightarrow(\forall n\in{\mathbb N})P_n [/math]
>>61088
I wanted to write a guide that would help people far removed from mathematics understand it, even with no prior knowledge of it at all. I wanted to show that behind a string of cryptic symbols like
[math] (\forall n\in{\mathbb N})\Big((\forall i\in\{1;\dots;n-1\})P_i\Rightarrow P_n\Big)\Rightarrow(\forall n\in{\mathbb N})P_n [/math]
>>90343
I had similar ideas too.
>>90378
$ A^2_3 $
>>85801 (OP)
$\prod_{i \leq m} p(i)^(k_i)$
$\prod_{i \leq m} p(i)^k_i$
$\prod_{i \leq m} p(i)^{k_i}$
$ \bigcup_\alpha(a_\alpha;+\infty)=(\inf\:a_\alpha;+\infty) $
$H(p_1, p_2, ...) = -\sum_{n=1}^{\infty}p_n \log(p_n)$
\begin{align}
\text{mathbf} \; & \mathbf{abcdefgh} \; \mathbf{ABCDEFGH} \; \mathbf{\alpha \beta \gamma \delta}
\\
\text{boldsymbol} \; & \boldsymbol{abcdefgh} \; \boldsymbol{ABCDEFGH} \; \boldsymbol{\alpha \beta \gamma \delta}
\\
\text{bm} \; & \bm{abcdefgh} \; \bm{ABCDEFGH} \; \bm{\alpha \beta \gamma \delta}
\end{align}
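A note on the preamble (my assumption, not stated above): \boldsymbol comes with amsmath (via amsbsy) and \bm comes from the bm package, while \mathbf is in base LaTeX but normally does not embolden lowercase Greek, which is exactly the difference the comparison above shows. A sketch of the required lines:
\usepackage{amsmath} % provides \boldsymbol
\usepackage{bm}      % provides \bm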
$f(x) \nrightarrow k(x)$
$C$
>>90921
Just keep in mind that this is not the symbol for a function (absolute synonyms: mapping, map).
Maybe what you need is \mapsto?
$f: x \mapsto y \:\: \Leftrightarrow \:\: y = f(x)$
>>90926
No, you've misunderstood. $f(x)$ and $g(x)$ are two functions in an equality, not the range and the domain of a function. $ \rightarrow$ here denotes an identity transformation.
$
\begin{equation}
\frac{4(x-3)}{x-3} = 6
\end{equation}
$
abobas abobs abob
$\begin{equation} \frac{4(x-3)}{x-3} = 6 \end{equation} $
abo bobas bobbos
abobas abobs abob
$\frac{4(x-3)}{x-3} = 6$
abo bobas bobbos
$2y(y')^3+y''=0$
\[0, 1, 2, 3, 4, 5, 6, 7, 8\]
3\sqrt{3}i ?
\[3\sqrt{3}i\]
$ \overline{\langle u,v \rangle}$
\begin{gather}
\displaystyle\underset{ \scriptscriptstyle 3×3 }{[ \hspace{.2ex}\mathcal{A}\hspace{.2ex} ]} \hspace{.1ex}
= \hspace{-0.2ex} \ldots
\end{gather}
Matrix ${[ \hspace{.2ex}\mathcal{A}\hspace{.2ex} ]}$ is a~3×3 matrix, because it has 3 rows and 3 columns.
Matrix ${[ \hspace{.2ex}\mathcal{B}\hspace{.2ex} ]}$ has 2 rows and 4 columns, so its dimension is 2×4.
Matrix ${[ \hspace{.2ex}\mathcal{C}\hspace{.2ex} ]}$ is a~column matrix with just one column, and its dimension is 3×1.
And ${[ \hspace{.2ex}\mathcal{D}\hspace{.2ex} ]}$ is a~row matrix with dimension 1×6.
Matrix ${[ \hspace{.3ex}\mathrm{A}\hspace{.3ex} ]}$ is a~$3×3$ matrix, because it has 3 rows and 3 columns.
Matrix ${[ \hspace{.3ex}\mathrm{B}\hspace{.3ex} ]}$ has 2 rows and 4 columns, so its dimension is $2×4$.
Matrix ${[ \hspace{.3ex}\mathrm{C}\hspace{.3ex} ]}$ is a~column matrix (that is a~matrix with just one column), and its dimension is $3×1$.
And ${[ \hspace{.3ex}\mathrm{D}\hspace{.3ex} ]}$ is a~row matrix with dimension $1×6$.
${[ \hspace{.3ex}\mathrm{A}\hspace{.3ex} ]}$ is a~${3{×}3}$ matrix
$\langle x y \rangle = \overline{\langle y x \rangle} $
$\forall \varepsilon > 0 \exists n>N (x_n-A < \varepsilon)$
With an adequate formalization of a first-order language you won't have that problem, because the set of variable symbols will be chosen naturally, and naturalness in particular implies that this set is decidable.
In the abstract formulation you are being correctly pointed to the halting problem. Given an arbitrary program C, one can build a program $C^*$ that runs C, prints the symbol $x_{n+1}$ at the n-th step of C's run, and, if $C$ terminates, then continues by first printing $x_0$ and then, in an infinite loop, all the remaining $x_i$. It is easy to see that $C^*$ is always a valid algorithm enumerating countably many symbols, and that $C^*$ enumerates $x_0$ if and only if $C$ halts. Thus $(\cdot)^*$ is a reduction of the halting problem to the special case of your problem: checking whether $A$ prints the symbol $x_0$.
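To state the reduction compactly (my own rephrasing of the argument above, same notation):
$$ C \text{ halts} \iff C^{*} \text{ ever prints } x_0, $$
so any procedure deciding whether the enumerator prints $x_0$ would decide the halting problem.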
\begin{gather}
\| \boldsymbol{a} \| \| \boldsymbol{b} \| \operatorname{cosine} \measuredangle \Bigl( \boldsymbol{a} \widehat{\;\;} \boldsymbol{b} \Bigr)
\\
\| \boldsymbol{a} \| \| \boldsymbol{b} \| \operatorname{cosine} \measuredangle \Bigl( \boldsymbol{a} \widehat{\phantom{w}} \boldsymbol{b} \Bigr)
\\
\| \boldsymbol{a} \| \| \boldsymbol{b} \| \operatorname{cosine} \measuredangle \Bigl( \boldsymbol{a} \widehat{\phantom{W}} \boldsymbol{b} \Bigr)
\end{gather}
test
$u^{\displaystyle\frac{n}{m}} = \Bigl( u^{(2k+1)n} \Bigr) \! ^{\displaystyle\frac{1}{(2k+1)m}}$
\begin{equation}
\begin{array}{c}
\scriptstyle s \\
\scriptstyle i \\
\scriptstyle n
\end{array}
\left( x \right)
\end{equation}
$r_\omega$
$r_{\omega+1}$
$y' = f(x)g(y)$
[math]y(x) = y_0 e^{\int\limits_{x_0}^x {a(\xi)d(\xi)}} + e^{\int\limits_{x_0}^x {a(\xi)d(\xi)}}\int\limits_{x_0}^x {b(s) e^{\int\limits_{s}^{x_0} a(\xi)d(\xi)} d(s)}[/math]
$A_1, A_2, ..., A_n, ...$ $A := \cup_{i=1}^{\infty} A_i$ $\lim_{m\to\infty}P(\cup_{i=1}^{m} A_i) = P(A)$
\begin{gather} \Bigl( x + \frac{b}{2 a} \Bigr)^{\!2} = \frac{b^2 - 4ac}{(2 a)^2} \\ \Bigl| x + \frac{b}{2 a} \Bigr| = \sqrt{ \frac{b^2 - 4ac}{(2 a)^2} } \\ x + \frac{b}{2 a} = \pm \sqrt{ \frac{b^2 - 4ac}{(2 a)^2} } \\ x = \pm \frac{\sqrt{b^2 - 4ac}}{2 a} - \frac{b}{2 a} \end{gather}
\begin{gather} ax^2 + bx + c = 0, \:\: a \neq 0 \\ \frac{a}{a} x^2 + \frac{b}{a} x + \frac{c}{a} = 0, \\ x^2 + 2 \frac{1}{2} \frac{b}{a} x + \frac{1}{2^2} \frac{b^2}{a^2} + \frac{c}{a} = \frac{1}{2^2} \frac{b^2}{a^2}, \\ \Bigl( x + \frac{1}{2} \frac{b}{a} \Bigr)^{\!2} = \frac{1}{4} \frac{b^2}{a^2} - \frac{c}{a}, \\ \Bigl( x + \frac{b}{2 a} \Bigr)^{\!2} = \frac{b^2}{4 a^2} - \frac{c}{a} \frac{4a}{4a}, \\ \Bigl( x + \frac{b}{2 a} \Bigr)^{\!2} = \frac{b^2}{4 a^2} - \frac{4ac}{4 a^2}, \\ \Bigl( x + \frac{b}{2 a} \Bigr)^{\!2} = \frac{b^2 - 4ac}{4 a^2}, \\
\Bigl( x + \frac{b}{2 a} \Bigr)^{\!2} = \frac{b^2 - 4ac}{(2 a)^2}, \\ \Bigl| x + \frac{b}{2 a} \Bigr| = \sqrt{ \frac{b^2 - 4ac}{(2 a)^2} }, \\
x + \frac{b}{2 a} = \pm \sqrt{ \frac{b^2 - 4ac}{(2 a)^2} }, \\
x = \pm \frac{\sqrt{b^2 - 4ac}}{\sqrt{(2 a)^2}} - \frac{b}{2 a}, \\
x = \pm \frac{\sqrt{b^2 - 4ac}}{| 2 a |} - \frac{b}{2 a}, \\
x = \frac{\pm \sqrt{b^2 - 4ac}}{2 | a |} - \frac{b}{2 a}.
\end{gather}
\begin{gather} ax^2 + bx + c = 0, \:\: a \neq 0, \\[.5em] \frac{a}{a} x^2 + \frac{b}{a} x + \frac{c}{a} = 0, \\[.2em] x^2 + 2 \cdot \frac{1}{2} \frac{b}{a} \cdot x + \frac{1}{2^2} \frac{b^2}{a^2} + \frac{c}{a} = \frac{1}{2^2} \frac{b^2}{a^2}, \\[.2em] \Bigl( x + \frac{1}{2} \frac{b}{a} \Bigr)^{\!2} = \frac{1}{4} \frac{b^2}{a^2} - \frac{c}{a}, \\[.2em] \Bigl( x + \frac{b}{2 a} \Bigr)^{\!2} = \frac{b^2}{4 a^2} - \frac{c}{a} \frac{4a}{4a}, \\[.2em] \Bigl( x + \frac{b}{2 a} \Bigr)^{\!2} = \frac{b^2}{4 a^2} - \frac{4ac}{4 a^2}, \\[.2em] \Bigl( x + \frac{b}{2 a} \Bigr)^{\!2} = \frac{b^2 - 4ac}{4 a^2}, \\[.2em] \Bigl( x + \frac{b}{2 a} \Bigr)^{\!2} = \frac{b^2 - 4ac}{(2 a)^2}, \\[.2em] \Bigl| x + \frac{b}{2 a} \Bigr| = \sqrt{ \frac{b^2 - 4ac}{(2 a)^2} }, \\[.2em] x + \frac{b}{2 a} = \pm \sqrt{ \frac{b^2 - 4ac}{(2 a)^2} }, \\[.2em] x = \pm \frac{\sqrt{b^2 - 4ac}}{\sqrt{(2 a)^2}} - \frac{b}{2 a}, \\[.2em] x = \pm \frac{\sqrt{b^2 - 4ac}}{| 2 a |} - \frac{b}{2 a}, \\[.2em] x = \frac{\pm \sqrt{b^2 - 4ac}}{2 | a |} - \frac{b}{2 a}. \end{gather}
$(\frac{a_1}{b_1}, \frac{a_2}{b_2},\frac{a_3}{b_3}...) \mapsto \frac{p_{1}^{a_1}p_{2}^{a_2}p_{3}^{a_3}... }{p_{1}^{b_1}p_{2}^{b_2}p_{3}^{b_3}...}$
$\int{ \displaystyle\frac{A(x)dx}{B(x)\sqrt{S(x)}} }$
$D: V_1 \times V_2 \times...\times V_n \mapsto F$
$detA = \sum_{\sigma \in S_{n} \varepsilon(\sigma) \prod_{i=1}^{n}A_{i\sigma(i)} } $
$detA = \sum_{\sigma \in S_{n}} \varepsilon(\sigma) \prod_{i=1}^{n}A_{i\sigma(i)} } $
$detA = \sum_{\sigma \in S_{n}} \varepsilon(\sigma) \prod_{i=1}^{n}A_{i\sigma(i) }$
>>85801 (OP)
\lim\limits_{x \to 2} 1/|x-2| = \infty
${f_i}_n$.
$(e_i)_n$
$\sum_{i=1}^{n}A_{ii} \mapsto \sum_{i} \sum_{j} C_{ij}^{-1}A_{ji}$
$\sum_{i=1}^{n}A_{ii} \mapsto \sum_{i}^{n} \sum_{j}^{n} C_{ij}^{-1}A_{ji}$
$K_{v} = K_{v_\text{М}}\cdot K_{v_\text{И}}\cdot K_{v_\text{П}}\cdot K_{v_\text{С}}\cdot K_{v_\text{Ф}}\cdot K_{v_\text{О}}\cdot K_{v_\text{В}}\cdot K_{v_{\varphi}}$
>>93479
thank you
>>93479
\mid
\mathbin{|}
\,
\hspace{.5ex}
\hspace{.5em}
$b \cdot x \mid x \in \mathbb{Z}$
$b \cdot x \mathbin{|} x \in \mathbb{Z}$
$b \cdot x \, | \, x \in \mathbb{Z}$
$b \cdot x \hspace{.5ex} | \hspace{.5ex} x \in \mathbb{Z}$
$b \cdot x \hspace{.5em} | \hspace{.5em} x \in \mathbb{Z}$
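For a set-builder expression specifically, one more option worth comparing (a sketch; \middle needs e-TeX, which any modern LaTeX or MathJax has):
$\{\, b \cdot x \mid x \in \mathbb{Z} \,\}$
$\left\{ b \cdot x \;\middle|\; x \in \mathbb{Z} \right\}$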
[math]b_{n} = 1 + n \cdot b_{n-1}[/math]
[math]b_{0} = 2[/math]
$b_{n} = 1 + n \cdot b_{n-1}$
[math]\int {\frac{1}{{1 + \sqrt[3]{x}}}} [/math]
[math]3\int {\frac{{{t^2}}}{{t + 1}}dt = \frac{{3{t^2}}}{2} - 3t + \ln \left| {t + 1} \right|} [/math]
[math]t = \sqrt[3]{x}[/math]
$ \frac{dt}{dx} = \frac{1}{3x^{2/3}} = \frac{1}{3t^2} \\ $
$ dx = 3t^2 dt \\ $
$ \int \frac{dx}{1+x^{1/3}}= \int \frac{3t^2dt}{1+t} = 3 \int \frac{t^2dt}{1+t} = 3 \int t - \frac{t}{1+t}dt = 3 ( \int t dt - \int \frac{tdt}{1+t} ) = 3 ( \int t dt - \int (1 - \frac{1}{1+t}) dt) = \\ $
$ = 3(\int t dt - \int 1dt + \int \frac{d(1+t)}{1+t}) = \frac{3}{2}t^2 - 3t + \ln|1+t| + C = \\ $
$ = \frac{3}{2}x^{2/3} - 3x^{1/3} + \ln|1+x^{1/3}| + C $
a_{ij}
[math]a_{ij}[/math]
[math]\[x = {t^2} + 1\][/math]
[math]x = {t^2} + 1[/math]
1
Does LaTeX rendering work on 2ch in general, or only in /math/?
f $\sqrt[\delta ]{{\frac{{{a^b}}}{c}}}$
f \[\sqrt[\delta ]{{\frac{{{a^b}}}{c}}}]\
f \(\sqrt[\delta ]{{\frac{{{a^b}}}{c}}})\
Whoa, when did they wire LaTeX in here? Much respect.
>>85801 (OP)
$ \iiint div(F) dV $
\(\Feyn{fs f gl f glu f fs}\)
\frac{x_{n+1}}{x_{n}}
frac{x_{n+1}}{x_{n}}
\[
m_{воров}
\]
$m_{воров}$
$m_{воров}=\frac{F_{воров}}{a_{воров}}$
$$hello$$
Hello guys, how are you doing?
I've been doing some finance math and got stuck at one point.
Let's say we have the price of a stock S(0) at time 0. We divide the continuous timeline between 0 and N into discrete steps 0, 1, ..., N.
At each step the price S(n) can go up or down with corresponding logarithmic returns ln(1+u) and ln(1+d). Thus at each step we have a variable k(n) which has a Bernoulli-type distribution with two outcomes:
\begin{equation}
k(n) = \left\{
\begin{array}{ll}
\ln(1+u) & \quad p\\
\ln(1+d) & \quad 1-p
\end{array}
\right.
\end{equation}
where p = 1-p.
So if we consider all the steps, we get a binomial tree, where each branch represents a possible movement of the stock's price from S(0) up to S(N).
Now let's consider a random variable P = k(0) + ... + k(N). The k(n) are i.i.d.
Its \mu is \sum_{n=1}^{N} E(k(n)) = N \cdot E(k(n)).
Its \sigma^2 is \sum_{n=1}^{N} var(k(n)) = N \cdot var(k(n)).
Let's say that \tau = \frac{1}{N}.
Hence, the mean of each Bernoulli trial \mu_{k(n)} is \mu \tau and the \sigma of each trial is \sigma \sqrt{\tau}.
And here is the weird part - the author then concludes that:
ln(1+u) = \mu\tau + \sigma\sqrt{\tau}
ln(1+d) = \mu\tau - \sigma\sqrt{\tau}
And that is not true, I guess. According to his logic
ln(1+u) = mean + standard dev = p ln(1+u) + p ln(1+d) + p (ln(1+u) - \mu\tau)^2 + p (ln(1+d) - \mu\tau)^2
and the right-hand side of the expression cannot be boiled down to ln(1+u), because:
p ln(1+u) + p ln(1+d) + p (ln(1+u) - \mu\tau)^2 + p (ln(1+d) - \mu\tau)^2 \neq ln(1+u)
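For reference, a worked per-step computation (my own sketch, reading the "p = 1-p" above as p = 1/2 and assuming u > d):
$$ \mu_k = \tfrac{1}{2}\ln(1+u) + \tfrac{1}{2}\ln(1+d), \qquad \sigma_k = \sqrt{\tfrac{1}{2}\bigl(\ln(1+u)-\mu_k\bigr)^2 + \tfrac{1}{2}\bigl(\ln(1+d)-\mu_k\bigr)^2} = \frac{\ln(1+u)-\ln(1+d)}{2}, $$
so in that symmetric case $\mu_k + \sigma_k = \ln(1+u)$ and $\mu_k - \sigma_k = \ln(1+d)$.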
$\rho_{i}(x) \geq r_{min}$
>>85801 (OP)
$x = e^{x^sin(x^2) - 1/\sqrt{x}}$
>>96568
$x = e^{x^{\sin(x^2)} - 1/\sqrt{x}}$ of course
∨
test m^2n
тест
$ x^2 \in \mathbb{R} $
$$ \begin{bmatrix}
A & B \\
C & D
\end{bmatrix} $$
$$ \begin{matrix}
A & B \\
C & D
\end{matrix}
\cdot
\begin{matrix}
0 & -1_n \\
1_n & 0
\end{matrix}
\cdot
\begin{matrix}
D & -B \\
-C & A
\end{matrix}
=
\begin{matrix}
0 & -1_n \\
1_n & 0
\end{matrix}
$$
$$ \begin{pmatrix}
A & B \\
C & D
\end{pmatrix}
\cdot
\begin{pmatrix}
0 & -1_n \\
1_n & 0
\end{pmatrix}
\cdot
\begin{pmatrix}
D & -B \\
-C & A
\end{pmatrix}
=
\begin{pmatrix}
0 & -1_n \\
1_n & 0
\end{pmatrix}
$$
$ a_{n}^{2} =\frac{x^{2}}{y^{2} +\sum _{1}^{n-1} a_{i}^{2}} $
[math] a_{n}^{2} =\frac{x^{2}}{y^{2} +\sum _{1}^{n-1} a_{i}^{2}} [/math]
>>98115
No.
Let $a = |\mathbf{r_2} - \mathbf{r_3}|, b = |\mathbf{r_3} - \mathbf{r_1}|, c = |\mathbf{r_1} - \mathbf{r_2}|$. Then the position vector $\mathbf{r}$ of the incenter is given by \begin{align} \mathbf{r} = \frac{a\mathbf{r_1} + b\mathbf{r_2} + c\mathbf{r_3}}{a+b+c} \end{align}
Could you please tell me how to make an arrow with a tilde over it, like in pic 1??
\stackrel{\backsim}{\rightarrow} gives pic 2, but I don't like how that looks, it's really ugly
$\mathrm{F}(x, y)\triangleq \left\{ \begin{array}{ll} \mathrm{min}\left(x, y\right), & \mathrm{if} \ 0\le \mathrm{min}\left(x, y\right) \le 1 \\ 0, & \mathrm{if} \ \mathrm{min}\left(x, y\right)<0 \\ 1, & \mathrm{if} \ \mathrm{min}\left(x, y\right)>1 \end{array} \right.$
well, did they fix it there or not
because of the broken LaTeX I hardly ever visit the board any more
$\beta_{2}$
>>98890
I also wanted that at some point, then just started using \xrightarrow{\sim}. Now I actually like it.
You can also do it like this:
\overset{\raisebox{0.25ex}{$\sim\hspace{0.2ex}$}}{\smash{\longrightarrow}}
Or change the font so that the arrowheads are smaller.
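A compact side-by-side of the two suggestions (a sketch; amsmath assumed):
$$ A \xrightarrow{\;\sim\;} B \qquad \text{vs.} \qquad A \overset{\raisebox{0.25ex}{$\sim\hspace{0.2ex}$}}{\smash{\longrightarrow}} B $$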
Holder's inequality is a fundamental result in mathematics that provides an upper bound for the product of two functions when their exponents are conjugate. It can be derived using the convexity of the function $x\mapsto x^p$ on the interval $[0,\infty)$, where $p>1$ is the exponent of one of the functions in the product.
To see how convexity plays a role, let $f$ and $g$ be non-negative measurable functions defined on a measure space $(X,\mu)$. Holder's inequality states that if $p$ and $q$ are conjugate exponents, i.e., $\frac{1}{p}+\frac{1}{q}=1$, then
$$\int_X fg \, d\mu \leq |f|_p |g|_q,$$
where $|f|_p=(\int_X |f|^p d\mu)^{1/p}$ is the $L^p$ norm of $f$.
To derive this inequality using convexity, we first observe that the function $x\mapsto x^{1/p}$ is convex on $[0,\infty)$ since its second derivative is $(1-p)/p^2 x^{1/p-2}$, which is non-negative for $p>1$ and $x\geq 0$.
Using this fact, we can apply the convexity of the function $x\mapsto x^{p/q}$ to the integral of the product $fg$ as follows:
\begin{align}
\int_X fg \ d\mu &= \int_X (f^{1/p}g^{1/q})^p (f^{1/p}g^{1/q})^{q-p} \ d\mu \\
&\leq \left(\int_X f^{p/q} \ d\mu\right)^{q/p} \left(\int_X g^{p/(p-q)} \ d\mu\right)^{(p-q)/p} \\
&= |f|_p^q |g|_q^{p-q}.
\end{align}
Here, we used the fact that $q-p = \frac{-p}{q-p}$, and the inequality follows from the convexity of $x\mapsto x^{p/q}$ and the fact that $p/q+1/(p-q)=1$.
Thus, we have obtained the Holder's inequality using the convexity of the function $x\mapsto x^p$ and its inverse function $x\mapsto x^{1/p}$, which allow us to apply the general convexity inequality to the integrals of the powers of $f$ and $g$.
>>85801 (OP)
[math]\mathcal{A_1}=\{"|"\}[/math]
>>101442
$$\mathcal{A_1}=\{"|"\}$$
$\sum^{102}_{m=3} \binom{m}{3}^{-1}$
$a^2 + \sqrt{b}$
(a + b)^{10}= \frac{10ab}{1}(\frac{9ab}{2}(\frac{8ab}{3}(\frac{7ab}{4}(\frac{6ab}{5} + a^{2} + b^{2}) + a^{4} + b^{4})+a^{6} + b^{6})+a^{8} + b^{8})+a^{10} + b^{10}
\[(a + b)^{10}= \frac{10ab}{1}(\frac{9ab}{2}(\frac{8ab}{3}(\frac{7ab}{4}(\frac{6ab}{5} + a^{2} + b^{2}) + a^{4} + b^{4})+a^{6} + b^{6})+a^{8} + b^{8})+a^{10} + b^{10}\]
\[(a + b)^{n}= \frac{nab}{1}(\frac{(n-1)ab}{2}...(\frac{(n+p+2)ab}{n-p}+a^{2-p} + b^{2-p})+a^{4-p} + b^{4-p})...)+a^{n} + b^{n}\]
>>101835
http://primat.org/mathred/mathred.html - you need to paste it into this site so that it sticks the slashes onto the beginning and the end
[math]\left[\frac{k(k+1)}{2}\right]^2 + (k+1)^3 = \frac{k^2 (k+1)^2}{4} + (k+1)^2 (k+1) = (k+1)^2 \left(\frac{k^2}{4} + k + 1\right)[/math]
$\dfrac{dy}{dx} = \dfrac{1-x}{y}$
Do you mean
$\dfrac{dy}{dx} = \dfrac{1 - x}{y}$
or
$\dfrac{dy}{dx} = 1 - \dfrac{x}{y}$?
Because in the first case separation of variables works:
$\int y \ dy = \int (1 - x) \ dx$
, while in the second case you can't separate the variables - the equation is homogeneous, so you'll need to introduce a substitution.
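For the homogeneous case, a sketch of the standard substitution (my own steps, assuming $x \neq 0$ and $y \neq 0$):
$$ y = vx, \quad \frac{dy}{dx} = v + x\frac{dv}{dx} \;\Longrightarrow\; v + x\frac{dv}{dx} = 1 - \frac{1}{v} \;\Longrightarrow\; \int \frac{v \, dv}{v - v^2 - 1} = \int \frac{dx}{x}. $$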
$$y_{a}=10\cdot a^{-\frac{a}{2a}}\left|1-e^{-\frac{5^{\frac{1}{a}}\left(x-a\right)}{25}}\right|$$
y_{a}=10\cdot a^{-\frac{a}{2a}}|1-e^{-\frac{5^{\frac{1}{a}}\left(x-a\right)}{25}}|
$$y_{a}=10\cdot a^{-\frac{a}{2a}}|1-e^{-\frac{5^{\frac{1}{a}}\left(x-a\right)}{25}}|$$
\(\bigcup_{i \in I}A_i = \{x: \exists i \in I, \, x \in A_i\}\)
$$\bigcup_{i \in I}A_i = \{x: \exists i \in I, \, x \in A_i\}$$
>>103186
Savage formula, the theory of OP's mom
$$ \int test dx $$
$ \int test dx $
[math] \int test dx [/math]
$$\mathbb{R}$$
$\sqrt[\leftroot{10} \uproot{5} 6]{27x^3}$
testtest
here is $\latex$ testtest
vtest $$\latex$$ vtest
[math]
$$
\forall \sigma \in S{_n} \;\;\; \sigma=\sigma{_1}\circ \dots\circ\sigma{_k} \;\;\text{a product of transpositions}
$$
[/math]
[math]\forall \sigma \in S{_n} \;\; \sigma=\sigma{_1}\circ \dots\circ\sigma{_k}[/math] is a product of transpositions
\frac{1}{2}
$\frac{ℎ}{p}$
$$
\frac{2\sqrt{3 - a^2}}{a} = m
\frac{1}{a} + \frac{\sqrt{3 - a^2}}{a} = n
$$
Expressing $\frac{\sqrt{3 - a^2}}{a}$ via $m$, we get $a = \frac{1}{n - \frac{m}{2}}$
$\frac{1}{n - \frac{\sqrt{3 - a^2}}{a}} = \frac{1}{n - \frac{m}{2}} = a$
$$
\frac{2\sqrt{3 - a^2}}{a} = m \\
\frac{1}{a} + \frac{\sqrt{3 - a^2}}{a} = n
$$
$\lim_{x \to \infty} (4 + \frac{1}{\infty})$
$\lim_{x \to \infty} 4 + \lim_{x \to \infty} \frac{1}{\infty})$
$\lim_{x \to \infty} (4 + \frac{1}{\infty})=
\lim_{x \to \infty} 4 + \lim_{x \to \infty} \frac{1}{x}=
4 + 0 = 4$
>>85801 (OP)
[math]f1 = 0 \& f3 = 1[/math]
[math]f( i + 1 )3 = ( i + 1 ) + \sum_{k = 1}^i\{f( k + 1 )3 - [ fk3 + f3 ][/math]
>>85801 (OP)
[math]f1 = 0 \& f3 = 1[/math]
[math]f( i + 1 )3 = ( i + 1 ) + \sum_{k = 1}^i\{f( k + 1 )3 - [ fk3 + f3 ]\}[/math]
>>85801 (OP)
[math]f1 = 0 \quad \& \quad f3 = 1[/math]
[math]f( i + 1 )3 = ( i + 1 ) + \sum_{k = 1}^i\{ \quad f( k + 1 )3 - [ \quad fk3 + f3 \quad ] \quad \}[/math]
>>85801 (OP)
[math]f1 = 0 \quad \& \quad f3 = 1[/math], and then I proved by induction that [math]f( i + 1 )3 = ( i + 1 ) + \sum_{k = 1}^i\{ \quad f( k + 1 )3 - [ fk3 + f3 ] \quad \}[/math]
$k1k2k3...kn$
$k_1k_2k_3...k_n$
$k_n-1$
$k_{n-1}$
$\overlinek_1k_2k_3...k_{n-1}k_n$.
$\overline{k_1k_2k_3...k_{n-1}k_n}$.
[math]62 \binom{8}{2} 6160595857*56[/math]
[math]/sqrt {i} ^ 4[/math]
[math]/sqrt {i^4}[/math]
[math]\sqrt{i^4}[/math]
[math]\sqrt {i}^4[/math]
$$ \sqrt {i^4} $$
$$ \sqrt {i}^4 $$
$ \frac{77}{99} $
[math]\sqrt{i^4}\neq\sqrt{i}^4[/math]
$\sqrt{i^4}\neq\sqrt{i}^4$
$ -\frac{sqrt{2}}{2}-i\frac{sqrt{2}}{2} $
$ -\frac{\sqrt{2}}{2}-i\frac{\sqrt{2}}{2} $
$-1$
$(a^b)^c=(a^c)^b=a^{b*c}$
$a^{(b*c)}$
>>115797
yes!
$1/frac {n}$
$1\frac {n}$
$1 \frac {n}$
$\frac {1}{n}$
$1^{\frac {1}{10}}$
$1^{{10} \times {\frac {1}{10}}}=1^1=1$
$1^{10}=1; (1^{10})^{\frac {1}{10}}=1^{\frac {1}{10}}$
$a$
$а$
>>115811
Can't it handle that?
$\frac{7}{11}$
$a^{b \times c}$
$b^n=a$
[math] [ A = ( A \cap B ) \cup ( A \setminus B ) ] \& [ ( A \cap B ) \cap ( A \setminus B ) = \emptyset ] [/math]
[math]\textup{[(d)} \to \textup{(e)]}[/math]
[math] [ A = ( A \cap B ) \cup ( A \setminus B ) ] \& [ ( A \cap B ) \cap ( A \setminus B ) = \varnothing ] [/math]
[math]\mathrm{[(d)} \to \mathrm{(e)]}[/math]
q^N=Q^{-N}
x
x
>>85801 (OP)
[math]\sout{\frac{b}{a}}[/math]
$\mathbb{R}^{3}$
$$\forall \epsilon > 0 \exists \delta: \forall |x-x_0| < \delta \Rightarrow |f(x) - A| < \epsilon$$
1\ =\ 0\bigm(a+b)=a^2+b^2
$${{(a+b)^2=a^2+b^2}}$$
$${(a+b)^2=a^2+b^2}$$