
Reverse Polish Notation Calculator Code (Chapter 4)

Reverse Polish Notation

The expressions we compute with most often are infix expressions, where the operator sits between its two operands. Their drawback is that the four arithmetic operations are not simply evaluated from left to right; they carry different precedences. To obtain the intended result we therefore have to rely on parentheses, and nested parentheses greatly increase the complexity of evaluation. This motivates Reverse Polish notation (RPN), i.e. postfix notation, whose defining feature is that each operator is placed directly after its operands. For example, the RPN form of $(3-4)*5$ is $3\ 4\ -\ 5\ *$, while the RPN form of $3-4*5$ is $3\ 4\ 5\ *\ -$. The advantage of RPN is that it needs neither parentheses nor precedence rules: evaluation simply proceeds in order.

Read more »

Bitwise Operators (2.9 Bitwise Operators)

C provides six bitwise operators: AND (&), OR (|), exclusive OR (^), left shift (<<), right shift (>>), and one's complement (~). Note that these operators may only be applied to integral operands, whether signed or unsigned. The integral types are char, short, int, and long (ordered by the number of bytes they occupy). As for floating-point numbers, bitwise operations act directly on the underlying binary representation and could produce unpredictable values, so the language forbids applying bitwise operators to floating-point operands outright.

Read more »

Usage of extern

In C, the qualifier extern is placed before a variable or function declaration to indicate that the definition of that variable or function is given elsewhere, often in another file; it tells the compiler to look in another location for the definition when the variable or function is used. Common usages are as follows:

Read more »

Chapter 4 of The C Programming Language presents the strindex function, which addresses the problem of locating a substring of some form within a main (text) string, i.e. the classic string-matching problem. A naive algorithm for this clearly exists; here we introduce an optimized matching algorithm beyond the naive one, namely the KMP algorithm, together with two different ways of implementing it.

Read more »

Now we are ready to talk about building linear regression models from data. Once we have obtained measurement data from experiments, it is important to construct a linear regression model as a predictive model. The basic structure of linear regression is

$$Ax = b,$$

where $A\in\mathbb{R}^{n\times m}$ and $b\in\mathbb{R}^{n}$ are given by the measurements. Therefore, building a linear regression model amounts to solving the linear system $Ax=b$. Here we consider only the overdetermined case $n>m$, since that is what we usually have with modern data. We will use the least-squares estimate to solve for the vector $x$.
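The least-squares estimate mentioned above can be written explicitly. Assuming $A$ has full column rank, minimizing the squared residual leads to the normal equations:

```latex
\min_{x}\ \|Ax - b\|_2^2
\quad\Longrightarrow\quad
A^T A\, x = A^T b
\quad\Longrightarrow\quad
\hat{x} = (A^T A)^{-1} A^T b.
```

In the overdetermined case $n>m$ this $\hat{x}$ generally does not satisfy $Ax=b$ exactly; it is the vector that makes the residual $b - A\hat{x}$ as small as possible in the 2-norm.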

Read more »

In many physics problems we encounter the solution of the linear equation $Ax=b$. Classically, we solve this equation in the case where $A$ is a square, invertible matrix. In fact, this case is too restrictive to model the real world well. Based on the singular value decomposition introduced earlier, we can generalize the linear equation to the case where $A$ is a non-square matrix. Indeed, in data analysis and data modeling the matrix $A$ is almost always non-square.
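As a sketch of this generalization: writing the economy SVD $A = U\Sigma V^T$ (with $\Sigma$ invertible on the nonzero singular values), the solution is expressed through the Moore-Penrose pseudoinverse:

```latex
A = U\Sigma V^T
\quad\Longrightarrow\quad
A^{+} = V\Sigma^{-1}U^T,
\qquad
\tilde{x} = A^{+}b = V\Sigma^{-1}U^T b.
```

When $A$ is square and invertible, $A^{+}$ reduces to $A^{-1}$, so this recovers the classical case as a special instance.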

Read more »

Dominant Correlation

Here we introduce one of the most useful interpretations of the SVD: it is in terms of correlations among the columns of $X$ and correlations among the rows of $X$. We claim that the matrices $U$ and $V$ given by the SVD can be seen as the eigenvectors of the correlation matrices $XX^T$ and $X^TX$, respectively. We now try to explain this claim.
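The claim follows from a short computation: substituting $X = U\Sigma V^T$ and using the orthogonality relations $V^TV = I$ and $U^TU = I$,

```latex
XX^T = U\Sigma V^T\, V\Sigma^T U^T = U\,(\Sigma\Sigma^T)\,U^T,
\qquad
X^TX = V\Sigma^T U^T\, U\Sigma V^T = V\,(\Sigma^T\Sigma)\,V^T,
```

so the columns of $U$ are eigenvectors of $XX^T$ and the columns of $V$ are eigenvectors of $X^TX$, in both cases with eigenvalues $\sigma_i^2$, the squared singular values.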

Read more »

First, we introduce three different kinds of matrix norms and, based on the singular value decomposition, verify that each of them depends only on the singular values:

  1. 2-norm (spectral norm)
  2. Frobenius norm
  3. Nuclear norm
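For a matrix $X$ with singular values $\sigma_1\ge\sigma_2\ge\cdots\ge\sigma_r>0$, the three norms in the list above come out as:

```latex
\|X\|_2 = \sigma_1,
\qquad
\|X\|_F = \sqrt{\sum_{i=1}^{r}\sigma_i^2},
\qquad
\|X\|_* = \sum_{i=1}^{r}\sigma_i.
```

All three are invariant under multiplication by orthogonal matrices, which is exactly why the SVD reduces each of them to a function of the singular values alone.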
Read more »

The SVD allows us to decompose a data matrix $X$ as the product of three matrices, $U,\ \Sigma,\ V^T$, where essentially $U$ contains information about the column space of $X$, $V$ contains information about the row space of $X$, and $\Sigma$ is a hierarchically ordered diagonal matrix, which tells you how important the various columns of $U$ and $V$ are.
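In symbols, for $X\in\mathbb{R}^{n\times m}$ with orthogonal $U\in\mathbb{R}^{n\times n}$ and $V\in\mathbb{R}^{m\times m}$:

```latex
X = U\Sigma V^T,
\qquad
\Sigma = \operatorname{diag}(\sigma_1, \sigma_2, \dots),
\qquad
\sigma_1 \ge \sigma_2 \ge \cdots \ge 0.
```

The "hierarchical ordering" is exactly the decreasing arrangement of the $\sigma_i$: the first few columns of $U$ and $V$ capture the dominant structure of $X$.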

Read more »