
Project Notes

Chen Zhifeng


|| Linux || Debian_Ubuntu || C/C++ || Python || Java || Tcl || GNU Radio || USRP || Socket Programming || QualNet || OPNET || NS-2 || MATLAB || H.264 || JM14.0 || FFMPEG ||

Linux:

|| Reading Notes || Actions || Q&A || My understanding || Useful Links ||

Reading Notes:
make:
make looks for a file called GNUmakefile, then makefile, then Makefile (GNU make checks them in that order).

A Brief Discussion of Writing Makefiles under Linux (Part 1)

A Brief Discussion of Writing Makefiles under Linux (Part 2)

Libraries come in three forms: static, shared, and dynamic.

The code of a static library is linked into the application at compile time, whereas a shared library is only loaded when the program starts running; at compile time, the library functions to be used are simply recorded.

A dynamic library is a variation of the shared library. It is also loaded at run time, but unlike a shared library, a library function is loaded not when the program starts but when a statement in the program actually needs it. The memory occupied by a dynamic library can be released while the program is running, freeing space for other programs. Since shared and dynamic libraries do not include the library function code in the program, only references to it, the resulting code is relatively small.
Most libraries that have been developed are shared libraries. The ELF executable format makes shared libraries fairly easy to implement, although library sharing can also be achieved with the older a.out format. ELF is currently the standard executable format on Linux.


The libraries available on the system are stored in /usr/lib and /lib. A library file name consists of the prefix lib, the library name, and a suffix. The suffix depends on the library type: shared libraries use .so plus a version number, static libraries use .a, and shared libraries in the old a.out format use .sa.

----------------------------------------------------------------------------------------------------------------

The basic workflow of make:
    --At this point, make automatically checks the timestamps of the relevant files. First, before checking the timestamps of "kang.o", "yul.o" and "sunq", it looks further down for rules whose targets are "kang.o" or "yul.o". For example, the prerequisites of "kang.o" are "kang.c", "kang.h" and "head.h". If any of these files has a newer timestamp than "kang.o", the command "gcc -Wall -O -g -c kang.c -o kang.o" is executed to update "kang.o". After "kang.o" and "yul.o" have been updated, make checks the original three files "kang.o", "yul.o" and "sunq"; as long as either "kang.o" or "yul.o" has a newer timestamp than "sunq", the second command is executed. In this way make finishes its automatic timestamp checking and starts compiling. This is the basic workflow of make.
    --http://cublog.cn/u/884/showart_216369.html

A recursively expanded variable is defined as: VAR = var (the right-hand side is expanded each time the variable is used)

A simply expanded variable is defined as: VAR := var (the right-hand side is expanded once, at the point of definition)

configure.ac (sometimes also named: configure.in):
    --http://www.adp-gmbh.ch/misc/tools/configure/configure_in.html#ac_init

Flowchart of how the autotools generate a Makefile:    http://book.csdn.net/BookFiles/132/03/image012.gif
 

Sound card
/dev/dsp, /dev/dspW, /dev/audio: reading from these devices is equivalent to recording, writing to them is equivalent to playback. The difference between /dev/dsp and /dev/audio is the sample encoding: /dev/audio uses mu-law encoding, /dev/dsp uses 8-bit (unsigned) linear encoding, and /dev/dspW uses 16-bit (signed) linear encoding. /dev/audio exists mainly for SunOS compatibility, so avoid using it if possible.

Useful Links:

Vbird's Linux notes (鸟哥的私房菜):    http://linux-vbird.3322.org/

On Autoconf, Automake and Makefiles:    http://www.cngnu.org/technology/1657/297.html

POSIX threads explained:    http://www.ibm.com/developerworks/cn/linux/thread/posix_thread1/index.html

back to top

 


Debian_Ubuntu:

|| Reading Notes || Actions || Q&A || My understanding || Useful Links ||

Reading Notes:


Configure the microphone settings.
    --refer to http://www.vyvy.org/main/node/131

Somehow, Ubuntu does not seem to configure the microphone settings well out of the box. Many users have run into different problems when trying to capture microphone input. (I've configured three machines, each with a different problem.) A few tips here.

    * Be sure that the microphone really works, is turned on, and has been plugged properly into the correct socket. Surprisingly this can be a common mistake.
    * Make sure that you are controlling the volume for the correct sound device.
    * Configure the following options properly: Microphone, Microphone Capture, Capture, Mic Boost (+20 dB) (or Mic Auto Gain), Mic Select, Surround Jack Mode. The signal may have been detected but the sound has been muted somehow.
    * Some useful text-based utilities:
        o aplay -l shows all soundcards and digital audio devices.
        o alsamixer is an ncurses-based mixer program for the ALSA soundcard driver.
        o amixer is a command-line mixer for ALSA soundcard driver. Useful for presetting the volume settings if they do not persist after rebooting the machine. E.g., amixer get 'Capture', amixer set 'Capture' cap.
    * You can use krecord or gnome-sound-recorder to test the microphone. The 'Input Level' monitor of krecord is especially useful. Try recording from 'Microphone' or 'Capture'; some machines work with both, but some work with only one of them.
 

Actions:

Q&A:

On the difference between killall -HUP xinetd and service xinetd restart
http://linux.chinaunix.net/bbs/archiver/?tid-832533.html

My understanding:

Useful Links:

Debian Documentation:    http://www.debian.org/doc/

Debian in Chinese:
The /etc/apt/sources.list file:    http://www.debian.org/doc/manuals/apt-howto/ch-basico.zh-cn.html

back to top

 


C/C++:

|| Reading Notes || Actions || Q&A || My understanding || Useful Links ||

 Reading Notes:

Introduction to object-oriented programming in C++ -- classes (class)

A C-style struct has no access control such as public the way a class does, and a struct cannot contain functions, whereas a class can.

      Inheritance is the most important feature that distinguishes object-oriented programming languages from other languages; other languages do not have it.

       In a class hierarchy, when a subclass inherits the data structure and methods of only one parent class, this is called single inheritance.

      When a subclass inherits the data structures and methods of several parent classes, this is called multiple inheritance.

 

  Then a is an object (an instance) of the test struct.

  The members inside the test struct can be called components (or data members, or attributes).

Subroutines and functions are both referred to as methods.

In C, the members of a struct are public by default; in C++, the default access for class members is private, so any member that needs to be called from outside the class must be declared public with the public keyword. The same applies to member functions of a class; calling them is not very different from calling ordinary functions.

 
...............................................................................

Memory layout of classes and structs

When there are no virtual functions and no ancestor class has virtual functions, the memory layout of a class and a struct is essentially the same.

You can think of a struct as a simple class.

That is, suppose you define a class

class A
{
  int a;
  int b;
  int foo();
};

 

If the class has a constructor, the compiler inserts a call to it whenever an instance of the class is created.

If there is no constructor, the class is no different from a struct.

 

When there are virtual functions, the compiler adds space in the class for a virtual-table pointer and forces a constructor to be generated;

in that constructor it fills the pointer with the address of the class's virtual function table.

...............................................................................

Next, the scope resolution operator (::):

1     With the scope resolution operator we can define member functions outside the class definition.

2     The scope resolution operator also distinguishes external global variables from class member variables.
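
A minimal sketch of both uses (the class and variable names here are made up for illustration):

#include <iostream>

int count = 10;            // a global variable

class Counter {
public:
    int count;             // a member variable that hides the global name
    void show();           // only declared inside the class
};

// 1) use Counter:: to define the member function outside the class
void Counter::show()
{
    // 2) use ::count to reach the global variable hidden by the member
    std::cout << "member count = " << count
              << ", global count = " << ::count << std::endl;
}

int main()
{
    Counter c;
    c.count = 3;
    c.show();   // prints: member count = 3, global count = 10
    return 0;
}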

 

Definition of an overloaded function: functions in the same declaration scope that have the same name but different parameter lists; the parameter list uniquely identifies and distinguishes each overload.

  Members declared in a class's private: section (whether data members or member functions) can be accessed only by the class's member functions and friends.

  Members declared in a class's protected: section (whether data members or member functions) can be accessed only by the class's member functions and friends, and by the member functions and friends of derived classes.

Members declared in a class's public: section (whether data members or member functions) can be accessed by anyone.
 

The reason it must be declared const static is that in C++ only static constant class members of integral type can be initialized inside the class definition.

 The scope of a class is the region covered by the class definition and the corresponding member function definitions; within that region, a class's member functions have unrestricted access to the data members of the same class.

If that definition is hard to grasp, put it simply: within a region, a given name must be unique and cannot be defined twice; such a region is a name space.

On duplicate names:

1.       A name cannot be given two different types at the same time.

2.       Non-type names (variable names, constant names, function names, object names, enum members) cannot be duplicated.

3.       Type names and non-type names do not share the same name space, so they may be duplicated even within the same scope; but when the two appear together, you must add the class prefix when defining an object of the class, to distinguish the type name from the non-type name!

Overloading / templates / generics
Overloaded functions
Templates in C++ are function templates and class templates, roughly corresponding to generics in Java and C#.
Definition: a template is a tool for code reuse; it parameterizes types, i.e. types themselves become parameters, which achieves genuine code reuse.
    (For overloaded functions, C++'s checking mechanism distinguishes them by their parameters and by the class they belong to.
    For example, to find the maximum of two numbers, the MAX() function would need a separate overload for each data type; see the sketch below.)
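
A small sketch of the MAX() idea above (my own illustration, not code from the linked tutorial): with overloading each type needs its own version, while a single function template covers them all:

#include <iostream>

// Overloading: one version per type, same name, different parameter lists.
int    MAX(int a, int b)       { return a > b ? a : b; }
double MAX(double a, double b) { return a > b ? a : b; }

// Template: one definition, the type T becomes a parameter.
template <typename T>
T tmax(T a, T b) { return a > b ? a : b; }

int main()
{
    std::cout << MAX(3, 7)  << " " << MAX(2.5, 1.5)  << std::endl; // picks the right overload
    std::cout << tmax(3, 7) << " " << tmax(2.5, 1.5) << std::endl; // compiler instantiates tmax<int>, tmax<double>
    return 0;
}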

Generic programming
Generics introduced the concept of type parameters to the .NET Framework, which makes it possible to design classes and methods that defer specifying their types until the client code declares and instantiates the class or method.


boost is a quasi-standard library, a continuation and extension of the STL; its design philosophy is close to the STL's, using generics to maximize reuse.
Compared with the STL, however, boost is more practical: the STL concentrates on algorithms, while boost contains many utility classes that do more concrete work.
Smart pointers in Boost
http://www.stlchina.org/twiki/bin/view.pl/Main/BoostProgrammSmartPoint

----------------------------------------------------------------------------------------------------------------

References:
1. Why use references?
In complicated programs, pointers are error-prone and the code is hard to read. In such cases C++ references can be used instead of pointers, making the program clearer and easier to understand.
A reference is just another name (an alias) for a variable; operating on the reference is equivalent to operating on the original variable.
2. How to use a reference?
In C, if a function needs to modify the value of a variable passed as an argument, the parameter must be declared as a pointer. For example:

  void Add(int *a) { (*a)++; }
With a reference the function is defined as:

  void Add(int &a) { a++; } // a is a reference to an int

  This function does the same thing as the pointer version above, but the code is more concise and easier to read.
 

new/delete:
1. Why use new?
Compared with malloc(), new has the following advantages:

  (1) new computes the size of the type to be allocated automatically, without the sizeof operator, which is convenient and avoids mistakes.

  (2) It automatically returns a pointer of the correct type, so no cast is needed.

  (3) new can initialize the allocated object.
2. How to use new?
(1) int *p;
    p = new int[10]; // allocate an array of 10 ints
    delete[] p;      // delete the array

  (2) int *p;
    p = new int(100); // dynamically allocate an int and initialize it

Can new be followed directly by a constructor call?
--howto_square_ff* test_block = new howto_square_ff()
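
Yes: new Type(args) allocates the object and then calls the matching constructor. A tiny sketch with a made-up class:

#include <iostream>

class Point {
public:
    Point(int x, int y) : x_(x), y_(y) { std::cout << "ctor\n"; }
    ~Point() { std::cout << "dtor\n"; }
    int x_, y_;
};

int main()
{
    Point* p = new Point(1, 2);   // new allocates memory and calls Point(1, 2)
    int*   q = new int(100);      // works for built-in types too
    std::cout << p->x_ << " " << *q << std::endl;
    delete p;                     // calls the destructor, then frees the memory
    delete q;
    return 0;
}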

Inline functions:
1. Why use inline?
In C++, inline functions should be used instead of macro calls; this achieves what a macro call achieves while avoiding its drawbacks.

Inlining trades code expansion (copying) for removing the overhead of a function call, thereby improving execution efficiency. If executing the function body takes much longer than the call overhead, the gain is small. On the other hand, every call site of an inline function gets a copy of the code, which increases total code size and memory use. Inlining is not appropriate when:
(1) the function body is long, so inlining would cost a lot of memory;
(2) the function body contains a loop, so executing the body already takes longer than the call overhead.

http://blog.csdn.net/wangzhanhang/archive/2004/07/06/35143.aspx


2. How to use inline?
inline int Add(int a, int b); // declare Add() as an inline function
    -http://www.softexam.cn/eschool/details.asp?id=10856


Friends
1. Why use friends?
They let the functions you designate operate on protected data, avoiding making all class members public and so protecting the data members as much as possible.
2. How to use friends?
Declare an ordinary function inside the class and prefix it with friend; that function then becomes a friend of the class and can access all of its members.
A friend function is not a member function of the class; it is just an ordinary function declared as a friend of the class, so its definition outside the class must not be written as void Internet::ShowN(Internet &obj). Keep this in mind.

An ordinary function can be a friend of several classes.
A member function of one class can also be a friend of another class, so a member function of one class can operate on the data members of the other.
An entire class can be a friend of another class; such a friend is called a friend class. Every member function of the friend class can access all members of the other class.
    --http://tech.163.com/05/0405/14/1GJ664RL00091589.html
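
A minimal sketch of a friend function (the names are made up; this is not the Internet/ShowN example from the linked article):

#include <iostream>

class Account {
    double balance;                    // private by default
public:
    Account(double b) : balance(b) {}
    // ShowBalance is an ordinary function declared as a friend of Account
    friend void ShowBalance(const Account& a);
};

// Defined outside the class as a plain function, NOT as Account::ShowBalance
void ShowBalance(const Account& a)
{
    std::cout << "balance = " << a.balance << std::endl; // a friend may read private data
}

int main()
{
    Account acc(42.0);
    ShowBalance(acc);
    return 0;
}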
 

const定义:
const int * Constant2

declares that Constant2 is a variable pointer to a constant integer and

int const * Constant2

is an alternative syntax which does the same, whereas

int * const Constant3

declares that Constant3 is a constant pointer to a variable integer and

int const * const Constant4

declares that Constant4 is constant pointer to a constant integer. Basically ‘const’ applies to whatever is on its immediate left (other than if there is nothing there in which case it applies to whatever is its immediate right).
    --http://duramecho.com/ComputerInformation/WhyHowCppConst.html
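
A quick sketch of which assignments compile for the four declarations above (the error cases are left commented out):

int value = 1, other = 2;

const int *       Constant2 = &value;  // variable pointer to const int
int const *       AlsoC2    = &value;  // same meaning, alternative spelling
int * const       Constant3 = &value;  // const pointer to variable int
int const * const Constant4 = &value;  // const pointer to const int

int main()
{
    Constant2 = &other;    // OK: the pointer itself may change
    // *Constant2 = 5;     // error: the pointed-to int is const
    *Constant3 = 5;        // OK: the pointed-to int may change
    // Constant3 = &other; // error: the pointer itself is const
    // *Constant4 = 5;     // error: both are const
    // Constant4 = &other; // error: both are const
    (void)AlsoC2; (void)Constant4;
    return 0;
}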

Virtual functions
Virtual functions are C++'s mechanism for polymorphism. The core idea is to access functions defined in a derived class through the base class.
If a points to an instance of class A, A::foo() is called; if a points to an instance of class B, B::foo() is called.
This ability of the same code to produce different effects is called "polymorphism".
    --http://www.programfan.com/article/showarticle.asp?id=2782
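
A minimal sketch of the A/B situation described above:

#include <iostream>

class A {
public:
    virtual void foo() { std::cout << "A::foo()\n"; }
    virtual ~A() {}
};

class B : public A {
public:
    void foo() { std::cout << "B::foo()\n"; }   // overrides A::foo()
};

int main()
{
    A* a = new A;
    a->foo();       // prints A::foo()
    delete a;

    a = new B;
    a->foo();       // same call site, prints B::foo() -- polymorphism
    delete a;
    return 0;
}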

Function templates vs. function overloading:
Function overloading means defining several functions with the same name but with parameters that differ in type or in number (or both); only the name is the same.
A function template, on the other hand, is several functions with the same implementation (the same algorithm) but different parameter types.

 

Q&A:

When should you use a header file / #include, and when should you use extern?

    For a function declared in a .h file: if it is defined in the corresponding .c file, do not use the extern specifier in the declaration; otherwise, the extern specifier must be used explicitly.

    --http://blog.ednchina.com/tianlebo/23258/message.aspx

When the provider of a function changes its prototype unilaterally and the caller, unaware of the change, keeps using the old extern declaration, the compiler will not report an error. At run time, however, the missing or extra arguments often cause system errors. How should this be handled?

  The industry has no perfect solution for this. The usual practice is for the provider to declare its external interface in its own xxx_pub.h and for callers to include that header, which removes the need for extern declarations and avoids this kind of error.

    --http://hi.baidu.com/ice_water/blog/item/2119a06ef9ee27d281cb4a9a.html

Summary: prefer including header files, so that prototype changes cannot go unnoticed. Whether to use extern inside the header varies by author: the first link says do not use extern in the header corresponding to a *.c file (declarations are extern by default, and this makes clear where the function is defined); the second link says do use extern (it makes no difference and is a matter of habit).

1) Do not use extern in the header for functions defined in the corresponding .c file; 2) prefer including the header rather than writing extern declarations, so that prototype changes cannot go unnoticed; 3) use extern only when you really do not want to include all of the declarations in some header (or when that header might conflict with definitions in the current file?). See the sketch below.
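
A sketch of the xxx_pub.h pattern described above (the file names and the add() function are made up; the three files are shown together, with the cross-file #include lines commented so the snippet also builds as a single file):

/* ---- math_pub.h : the provider declares its public interface here ---- */
#ifndef MATH_PUB_H
#define MATH_PUB_H
int add(int a, int b);            /* extern is implied for function declarations */
#endif

/* ---- math.c : the provider's implementation ---- */
/* #include "math_pub.h"            so the compiler checks declaration vs definition */
int add(int a, int b) { return a + b; }

/* ---- main.c : the caller includes the header instead of writing extern ---- */
/* #include "math_pub.h" */
#include <stdio.h>
int main(void)
{
    printf("%d\n", add(2, 3));    /* if add()'s prototype changes, this now fails to compile */
    return 0;
}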

What is the difference between fopen and open in C?
    open() is the Unix system call; fopen() is the standard C function
    A Unix implementation will use open() in the implementation of fopen(), but the stream returned by fopen() provides buffering and works with functions like printf().
    Unless you want to take advantage of system-specific features, stick with fopen().
        --http://coding.derkeiler.com/Archive/C_CPP/comp.lang.c/2008-05/msg01629.html
    f = fopen (Filename, "wb"); p_in=open(input->infile, OPENFLAGS_READ)


how to use fscanf?
http://topic.csdn.net/u/20071122/22/48282C35-3317-4B3A-B101-77A84C019718.html

My understanding:

My take:    overloading is why templates exist
        a template hides the data type
        one core idea of C++: share, share, and share again!!!

Useful Links:

"Master C/C++ basics in 30 days" tutorial series:

http://www.pconline.com.cn/pcedu/specialtopic/050514cpp/index.html
 

back to top


Python:

|| Reading Notes || Actions || Q&A || My understanding || Useful Links ||

Reading Notes:

A class is itself an object. In fact "object" has a broad meaning here: in Python, every data type is an object.
    (Like C++ and Modula-3, and unlike Smalltalk, not all of Python's data types are classes: basic built-in types such as integers and lists are not classes, and even a more exotic type such as a file is not a class.)

Scope
Recursive invocations
Also, in Python any name that appears after a dot can be called an attribute.


To Python, a program, a script, and a piece of software all mean the same thing.
If you want to assign to a variable defined outside a function, you have to tell Python that the name is not local but global. This is done with the global statement. Without global it is impossible to assign to a variable defined outside the function.

You can use the value of a variable defined outside the function (provided no variable with the same name exists inside it). However, this is discouraged and should be avoided, because it leaves the reader unsure of where the variable is defined.

Variables that belong to an object or class are called fields. Objects can also have functionality, using functions that belong to the class; such functions are called the class's methods. This terminology helps us distinguish them from standalone functions and variables. Fields and methods together are called the attributes of the class.

Classes/objects can have methods just like functions; the only difference is an extra self variable.
 

The __init__ method runs as soon as an object of the class is created.
Most importantly, we never call __init__ explicitly; when creating a new instance we simply put the arguments in the parentheses after the class name, and they are passed on to __init__. This is what makes the method important.
__init__ is similar to a constructor in C++, C# and Java; likewise, __del__ is similar to a destructor.
Remember that you can refer to the variables and methods of the same object only through the self variable. This is called an attribute reference.

The '__init__.py' files are required to make Python treat the directories as packages.
 
Inheritance
A subtype can be substituted wherever the parent type is expected, i.e. the object can be treated as an instance of the parent class; this is called polymorphism.
In the example above, the SchoolMember class is called the base class or superclass, and the Teacher and Student classes are called the derived classes or subclasses.
Note that the base class's __init__ method is called explicitly, using the self variable, so that the base-class part of the object gets initialized. This is very important:
Python does not automatically call the base class's constructor; you have to call it yourself.
Python always looks for the method on the actual type first, as in this example. Only if it cannot find the method in the derived class does it start looking in the base classes, one by one.
A note on terminology: if more than one class is listed in the inheritance tuple, it is called multiple inheritance.

Lists
On a shopping list each item probably gets its own line; in Python you simply separate the items with commas.
The items of a list are enclosed in square brackets.

Tuples
Tuples are very similar to lists, except that, like strings, they are immutable: you cannot modify a tuple.
We can access an item in a tuple by giving its position inside a pair of square brackets, just as with lists. This is called the indexing operator.
A tuple with a single element is not as simple: you must put a comma after the first (and only) item, so that Python can distinguish a tuple from a parenthesized object in an expression.
That is, if you want a tuple containing the item 2, you must write singleton = (2,).

Dictionaries (hashes)
A dictionary is like an address book in which you look up a person's address and details by name: we associate keys (names) with values (details).
Note that the keys must be unique, just as you cannot look up the right information when two people happen to have exactly the same name.
Note that only immutable objects (such as strings) can be used as dictionary keys, but either immutable or mutable objects can be used as dictionary values.
James: different from lists and tuples,
    remember that the key/value pairs of a dictionary are unordered. If you want a particular order, you have to sort them yourself before using them.

Sequences
The two main features of sequences are the indexing operator and the slicing operator. The indexing operator fetches a particular item from the sequence.
The slicing operator lets us obtain a slice of the sequence, i.e. part of the sequence.
 

The `*' before the argument points is a special feature of Python functions: it means the function can be called with an arbitrary number of arguments.

----------------------------------------------------------------------------------------------------------------

how to get help for python:
http://swaroopch.info/text/Byte_of_Python:First_Steps#Getting_Help

to see help for statements like print
    you need to set the PYTHONDOCS environment variable:
        linux: export PYTHONDOCS=/Users/swaroop/Documents/Python/Docs/
        windows: set PYTHONDOCS=D:\GNURadio\Python-Docs-2.5
    then help('print')

to see how to use help in python:
    help()

to see the file comment:
    __doc__ attribution is like help in matlab

to see the file attribution list:
    dir() function: it returns the list of names defined in that module. / The built-in dir() function lists the names defined by a module and returns a list of strings.
    for example: dir() returns ['__builtins__', '__doc__', '__name__'] / without an argument, dir() lists the names defined in the current scope.
    dir() does not list the names of built-in functions and variables. To list the built-in names, use the standard module __builtin__:
        >>> import __builtin__
        >>> dir(__builtin__)
        --refer to: http://www.5anet.com/index.php?module=article&action=showarticle&id=22586

to make object into a string:
    str()

to use a string attribution in a object:
    __str__: For example, in our program we say print member and we need a way of printing that object, so Python calls the __str__ method of that object and promptly prints the output of that method to the screen.
    --refer to http://swaroopch.info/text/Byte_of_Python:Object_oriented_programming#How_It_Works_5

The os module contains operating-system specific functionality.

The sys module contains functionality related to the Python system and its environment.
............................................................................................................................................................................

Variables that belong to an object or class are called fields / 域. Fields are of two types - they can belong to each instance/object of the class or they can belong to the class itself. They are called instance variables and class variables respectively.

functions are called methods of the class.

The self is equivalent to the this pointer in C++ and the this reference in Java and C#.

If we're looking for an object to represent a window on the computer screen, don't look at wx.Window, look at wx.Frame instead. wx.Frame is derived from wx.Window.
    --http://www.nd.edu/~jnl/sdr/docs/tutorials/8.html

We put all the statements that might raise errors in the try block, then handle all the errors and exceptions in the except clause/block.
If an error or exception is not handled, the default Python handler is called; it terminates the program and prints a message.
    --http://www.woodpecker.org.cn:9081/doc/abyteofpython_cn/chinese/ch13s02.html

The difference between sys.exit() and os.abort()
    --http://purpen.javaeye.com/blog/post/268043

Q&A:

Q: how to set the pythonpath in windows OS?
A: http://www.cse.clrc.ac.uk/qcg/ccp1gui/faq.shtml#h4install_winpaths

Q: when template library is compiled, whether they get memory allocation?
A: my answer is no, I think memory is allocated only when object is created.

how to use point:
    mylist = shoplist #only point to shoplist
    mylist = shoplist[:] #copy whole memory from shoplist
    refer to http://swaroopch.info/text/Byte_of_Python:Data_Structures#References

how to input and output:
    raw_input / print

how to transform type:
    we used int('5') to get the integer 5!

how to operate a file:
    refer to http://swaroopch.info/text/Byte_of_Python:Input_Output#Files
        f = file('poem.txt','w') # open for 'w'riting
        f.write(poem) # write text to file
        f.close() # close the file

        f = file('poem.txt') # if no mode is specified, 'r'ead mode is assumed by default
        while True:
            line = f.readline()
            if len(line) == 0: # zero length indicates end-of-file
                break
            print line, # notice comma to avoid automatic line breaks
        f.close() # close the file
        #alternative read
        for line in file('poem.txt'):
            print line,

how to use import:
    import cPickle as pickle
    --http://swaroopch.info/text/Byte_of_Python:Input_Output#Pickle

how to make comment:
    ''' and """ are the same, just like /* in C
    --refer to http://swaroopch.info/text/Byte_of_Python:Input_Output#Using_Triple_Quotes
 

My understanding:

The benefit of putting functions/methods inside objects: suppose strings and integers each have their own delete function; if an integer is mistakenly passed to the string delete function, the compiler has to check the type to catch the error. If delete is instead a method of the object, the compiler needs no such check; the mistake simply fails to compile.
Differences between C++ and Python:
1 C++ class members are private by default; Python's are public
2 C++ classes have constructors; a Python class's functions are defined inside the class body

You can add a variable directly to an object/instance using '.', without adding it to the class!!!
self tells the object's method which object it was called on (the object's name)
__name__ corresponds to the module's name attribute
__main__ corresponds to execution from the command line

Useful Links:

English:

Matlab in Python:    http://matplotlib.sourceforge.net

Byte of Python:    http://swaroopch.info/text/Byte_of_Python:Main_Page

Dive Into Python --Python from novice to pro:    http://diveintopython.org/

Python Library Reference:
http://docs.python.org/lib/
http://www.python.org/doc/current/lib/lib.html

Chinese:

Getting started with Python: http://hepg.sdu.edu.cn/Chinese_2003/service/computer/users_guide/linux/Python/tutorial_chi/python-tutorial.html

Python basics: http://www.ringkee.com/note/python/basic.htm (advantage: everything is on one page)

A Byte of Python (Chinese translation): http://www.woodpecker.org.cn:9081/doc/abyteofpython_cn/chinese/index.html

Dive Into Python (Chinese) -- Python from novice to pro [DIP_5_4_CPUG_RELEASE]
http://www.woodpecker.org.cn/diveintopython/toc/index.html


Python Library Reference (Chinese)
http://www.chinesepython.org/pythonfoundry/lib2.3/tmp/

Python Reference Manual (Chinese-English bilingual)
http://pythoncn.freezope.org/py_doc/ref_cn/index.html

Getting started with wxPython: http://www-128.ibm.com/developerworks/cn/linux/sdk/python/wxpy/index.html

Woodpecker Python open-source community: http://wiki.woodpecker.org.cn/moin

DrPython download: http://sourceforge.net/projects/drpython

back to top


Java:

|| Reading Notes || Actions || Q&A || My understanding || Useful Links ||

back to top


Tcl:

|| Reading Notes || Actions || Q&A || My understanding || Useful Links ||

back to top


GNU Radio:

|| Reading Notes || Actions || Q&A || My understanding || Useful Links ||

Reading Notes:

gr is an important sub-package of gnuradio, which is the core of the GNU Radio software. The type of 'flow graph' classes is defined in gr and it plays a key role in scheduling the signal flow.

The module audio provides the interfaces to access the sound card, while usrp provides the interfaces to control the USRP board. audio and usrp are often used as the signal source and sink.

blksimpl provides the implementation of several useful applications, such as FM receiver, GMSK, etc. For this example, the real signal processing part of the FM receiver is performed in this package.

By common sense, a graph should consist of vertices as well as the edges connecting them. These two basic elements are defined as endpoint class and edge class in basic_flow_graph.py
One block may have multiple ports, corresponding to different vertices in the graph.

The most distinguishing parts that flow_graph adds to the basic_flow_graph are the methods that control the running of the flow graph, such as start(), stop(), wait(), run(), etc. As indicated by their names, these methods are designed to actually drive the signal flows to move on or stop in the graph. scheduler plays a key role in these methods.
 


baseband pulse shaping filter for BPSK modulator
Baseband filtering of phase-modulated signals suffers from the disadvantage of introducing spikes into the RF spectrum [2]
refer to: CCSDC-SFCG Efficient Modulation Study, A Comparison of Modulation Schemes Phase 1-3, May 1993-1995.
http://rfdesign.com/mag/radio_simulation_realization_baseband/

Multiple samples per symbol on the modulation side to assist the pulseshaping filters and....
http://forums.ni.com/ni/board/message?board.id=290&message.id=429

Using PLLs to Obtain Carrier Synchronization
http://www.us.design-reuse.com/articles/article5187.html

Digital Receiver: Carrier Recovery
http://cnx.org/content/m10478/latest/#eq3

----------------------------------------------------------------------------------------------------------------

basic_flow_graph:
basic_flow_graph has only one attribute `edge_list', which is a List saving all the edges in the graph. It is initialized to be empty when a flow graph instance is created. The `list' will be modified only if we call the methods defined in the class basic_flow_graph, such as `connect' or `disconnect'.

flow_graph:
The most distinguishing parts that flow_graph adds to the basic_flow_graph are the methods that control the running of the flow graph, such as start(), stop(), wait(), run(), etc. As indicated by their names, these methods are designed to actually drive the signal flows to move on or stop in the graph

gr_sync_block:
gr_sync_block is an important class derived from gr_block. It implements a 1:1 block with optional history and certain simplifications are made.
--http://www.nd.edu/~jnl/sdr/docs/tutorials/6.html

gr.hier_block:
gr.hier_block describes a series of blocks in tandem in a flow graph. It assumes that there is at most a single block at the head of the chain and a single block at the end of the chain. Either head or tail may be None indicating a sink or source respectively.
--http://www.nd.edu/~jnl/sdr/docs/tutorials/7.html

the difference between my two PCs (release 3.0 and 3.0.3):
1. /usr/local/lib/python2.4/site-packages/gnuradio/usrp.py
import usrp_prims ==> from usrpm import usrp_prims

history: Assume block computes y_i = f(x_i, x_i-1, x_i-2, x_i-3...) History is the number of x_i's that are examined to produce one y_i. This comes in handy for FIR filters, where we use history to ensure that our input contains the appropriate "history" for the filter. History should be equal to the number of filter taps.

Actions:

why do we need carrier tracking?? test the module without carrier tracking!

in the demo, the TCP client doesn't close the connection; why does the server stop receiving?
could test this by having the client issue several send() calls


Q&A:

Problems during installing GNU Radio in Ubuntu 7.04 (Feisty Fawn)

how to implement a block? we don't pass the input as a parameter, so how is it set in the C++ code?
    --in hier_block.py there are: self.head = head_block and self.tail = tail_block
    --what happens on fg.connect? how is real-time operation achieved when fg.start runs?
        --the scheduler calls general_work() and the other methods of the C++ class

in /home/james/gnuradio/gnuradio-examples/python/usrp/usrp_siggen.py:
    since waveform had been defined in:
        self.siggen = gr.sig_source_c (self.usb_freq (),
                                                        gr.GR_SIN_WAVE,
                                                        self.waveform_freq,
                                                        self.waveform_ampl,
                                                        self.waveform_offset)
    why does the function def _configure_graph (self, type): call self.siggen.set_waveform (type)?
    then I went through gr_sig_source_c.cc to see how gr.sig_source_c is defined, and found a question:
        what is the meaning of:
            , d_sampling_freq (sampling_freq), d_waveform (waveform),...?
        it is the constructor's member-initializer list, which initializes d_sampling_freq, d_waveform, and so on (see the sketch below).
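
That ", d_sampling_freq (sampling_freq), d_waveform (waveform), ..." syntax is a C++ constructor member-initializer list: each member is initialized directly from the corresponding constructor argument before the constructor body runs. A small generic sketch (not the actual gr_sig_source_c code):

class sig_source {
public:
    sig_source (double sampling_freq, int waveform)
      // member-initializer list: the members are initialized from the
      // constructor arguments before the constructor body executes
      : d_sampling_freq (sampling_freq),
        d_waveform (waveform)
    {
        // constructor body (often empty when the list does all the work)
    }

private:
    double d_sampling_freq;
    int    d_waveform;
};

int main()
{
    sig_source src (64e6, 0);   // d_sampling_freq = 64e6, d_waveform = 0
    (void) src;
    return 0;
}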
 

----------------------------------------------------------------------------------------------------------------

we have more flexibility (and bandwidth) if we use complex (IQ) sampling?

what are frequency-specific complex filter coefficients? --from http://www.linuxjournal.com/article/7505
On a Pentium 4, computing sine and cosine takes on the order of 150 cycles. Given a 20M sample/sec input stream, we'd be burning up 20e6 * 150 = 3e9 cycles/sec merely computing sine and cosine!
The good news is there's a better way to implement the DDC in software. This technique, described by Vanu Bose, et al., in "Virtual Radios" (see Resources), allows us to run all of the computation at the decimated rate by rearranging the order of the operations and using frequency-specific complex filter coefficients instead of real coefficients.

should it be 4 MHz?
resulting in 8M complex samples/sec across the USB. This provides a maximum effective total spectral bandwidth of about 8 MHz by the Nyquist criterion.

why clock recovery?
a mismatch in D/A sampling rates causes the frequency to drift


PSK/FSK/ASK... is modulating directly to 2.4 GHz equivalent to modulating to an IF, say 12 MHz, and then upconverting to 2.4 GHz?

Friends And Related Function Documentation, what is the meaning of friend?
 

Q: where is gnuradio_swig_python (imported as "from gnuradio_swig_python import *" by every gr_* module)?
A: I find this file under Linux, but not under Windows; the difference is that I built the source code under Linux but did not run make under Windows.
 

ok!
1 how is the complex signal produced by reducing the frequency to half?
2 200MHz/32MHz? http://www.comsec.com/wiki?UniversalSoftwareRadioPeripheral

3 6MHz? http://www.comsec.com/wiki?UniversalSoftwareRadioPeripheral
4 more flexibility (and bandwidth) for complex sampling? http://www.nd.edu/%7Ejnl/sdr/docs/tutorials/4.html
5 how do you select the sampling rate if the spectrum is infinite, as in 802.11?

My understanding:

Cause: the development environment is immature and the GNU Radio documentation is incomplete.
Result: when something goes wrong, you have to guess at the cause and analyze the data by printing out binary files.
No board available.
 

Useful Links:

Official website:    http://www.gnu.org/software/gnuradio/

step by step:    http://www.nd.edu/~jnl/sdr/docs/

gnuradio module usage:    http://webpages.charter.net/cswiger/modules.htm

----------------------------------------------------------------------------------------------------------------

suggested reading (very detailed!):   

    http://gnuradio.org/trac/wiki/SuggestedReading

    http://www.nd.edu/~jnl/sdr/docs/tutorials/2.html

people using GNU Radio:    http://www.mail-archive.com/discuss-gnuradio@gnu.org/msg05762.html

SDR research links:    http://www-sop.inria.fr/rodeo/personnel/Thierry.Turletti/SoftwareRadio.html

Suggested Projects:    http://gnuradio.org/trac/wiki/GnuRadioToDo

mailing list bbs:    http://www.nabble.com/GnuRadio-f1878.html

back to top

 


USRP:

|| Reading Notes || Actions || Q&A || My understanding || Useful Links ||

Reading Notes:

Attentions:
Synchronizing all daughterboard LOs:
    * To modify these boards for coherent applications if you have USRP rev 4 boards:
    * Move R64 to R84, Move R142 to R153
        o This disables the daughterboard clocks
    * Move R35 to R36, Move R117 to R115
        o This connects the boards to the motherboard clock
        o These are all 0-ohm, so if you lose one, just short across the appropriate pads
    * Plug the board into side A of your USRP and execute one of the following commands to reprogram the dboard EEPROM:
        o usrp/host/apps/burn-db-eeprom -A -t flex_400_mimo_b --force
        o usrp/host/apps/burn-db-eeprom -A -t flex_900_mimo_b --force
        o usrp/host/apps/burn-db-eeprom -A -t flex_2400_mimo_b --force

Actions:

Q&A:

How to test USRP when you get it

Datasheets of the chips in the USRP

which version are these board? version 4?
    --may check here: http://comsec.com/wiki?USRPReleaseNotes
    --rev4.3

There is 20 dB of AGC on the USRP for RX and 20 dB of power control for TX. (from http://comsec.com/wiki?RfSections4USRP) why only 20? since we can set the peak value to FFFF, is any value OK?
    --from AD9862 datasheet, 20dB comes from analog

what function does the firmware perform?
    --the firmware runs in the USB controller to control the board

# usrp is data source
src = usrp.source_c (0, decim) / src.set_rx_freq (0, IF_freq) / src.set_pga(0,20)
where is the method definition for set_rx_freq and set_pga?
    --refer to tutorial 5: Actually all these methods are implemented using C++. The SWIG provides the interfaces between C++ and Python, so that we can call these functions directly in Python.
    --in usrp_basic.cc, usrp_basic_rx::set_pga()
    --in usrp_standard.cc, usrp_standard_rx::set_rx_freq()

why isn't IF_freq a parameter of usrp.source_c like decim? can the user set IF_freq for each RX channel, while the same decim is needed for all down-sampling?
    --refer to Tutorial 4: The multiple RX channels (1,2, or 4) must all be the same data rate (i.e. same decimation ratio). The same applies to the 1,2, or TX channels, which each must be at the same data rate (which may be different from the RX rate).
    --set_decim_rate: Set decimator rate. rate MUST BE EVEN and in [8, 256]. The final complex sample rate across the USB is adc_freq () / decim_rate () * nchannels ()
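
A quick sanity check of that formula, assuming the USRP1's 64 MS/s ADC clock (the numbers are only an illustration):

#include <iostream>

int main()
{
    const double adc_freq  = 64e6;  // USRP1 ADC clock, samples/sec (assumption)
    const int    decim     = 16;    // must be even and in [8, 256]
    const int    nchannels = 1;

    // complex sample rate delivered across the USB, per the formula above
    double usb_rate = adc_freq / decim * nchannels;
    std::cout << usb_rate << " complex samples/sec" << std::endl;  // prints 4e+06
    return 0;
}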

what is the difference between usrp.source_c (0, decim) (in tutorial 9) and usrp.set_decim_rate() (in tutorial 4)?
    --usrp.source_c (0, decim) initializes a source class and usrp1_source_base, which in turn calls usrp_stand_rx.set_decim_rate()
    --usrp.set_decim_rate() calls usrp_stand_rx.set_decim_rate() directly

where is usrp1.source_c in usrp.py?
tutorial 5: /usr/local/lib/python2.4/site-packages/gnuradio/usrp.py

what do the 4 channels in tutorial 4 mean? I think it should be 2 input and 2 output channels, right?
    --Each of the 4 ADCs can be routed to either the I or the Q input of any of the 4 DDCs. This allows for having multiple channels selected out of the same ADC sample stream.

since USB itself is differential and half-duplex, how can the USRP support full duplex? is it time-division multiplexed? if so, what is the time slice/delay for real-time applications?
    --transmit has higher priority

how to set up USB to handle hotplug, non-root access, and fixed device names.
    --http://www.hur.cn/stu/comp06/comp0602/200612/87608.html
    --Ubuntu uses udev for handling hotplug devices, and does not by default provide non-root access to the USRP.
        --http://gnuradio.org/trac/wiki/UbuntuInstall

----------------------------------------------------------------------------------------------------------------
why is a CIC needed? Our standard FPGA configuration includes digital down converters (DDC) implemented with cascaded integrator-comb (CIC) filters. CIC filters are very high-performance filters using only adds and delays.

My understanding:

Useful Links:

USRP install guide: http://www.comsec.com/wiki?UsrpInstall

more FPGA details: http://comsec.com/wiki?RfSections4USRP
 

back to top


Socket Programming:

|| Reading Notes || Actions || Q&A || My understanding || Useful Links ||

Reading Notes:

for C/C++:
http://www.chinaunix.net/jh/25/48248.html

For a stream socket, all you do is send() the data. For a datagram socket, you encapsulate the data in whatever way you choose and then use sendto().
There are two byte orderings: the most significant byte (sometimes called an "octet") first, or the least significant byte first. The former is called Network Byte Order. Some machines store data internally in this order, others do not. When I say something must be in NBO, you have to call a function (such as htons()) to convert it from Host Byte Order. If I do not mention NBO, leave it in host byte order.
htons()--"Host to Network Short"
  htonl()--"Host to Network Long"
  ntohs()--"Network to Host Short"
  ntohl()--"Network to Host Long"
Suppose you already have a sockaddr_in structure ina and an IP address "132.241.5.10" to store in it; you then use the function inet_addr() to convert the IP address from dotted notation into an unsigned long. Note that the address returned by inet_addr() is already in network byte order, so you do not need to call htonl() on it.
Is there a function that does the opposite, i.e. prints an in_addr structure in dotted notation? For that you use inet_ntoa(). Note that inet_ntoa() takes an in_addr structure as its argument.

If you want to listen for incoming connections, the sequence of system calls might look like this:
socket();
bind();
listen();
/* accept() should go here */
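
A minimal sketch of that call sequence in C (error handling is mostly omitted and the port number is arbitrary):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);   /* TCP (stream) socket */

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(5000);               /* host to network byte order */
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");    /* already in network byte order */

    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, 5);                              /* backlog of 5 pending connections */

    struct sockaddr_in peer;
    socklen_t len = sizeof(peer);
    int conn = accept(listener, (struct sockaddr *)&peer, &len);  /* blocks until a client connects */
    printf("client: %s\n", inet_ntoa(peer.sin_addr));

    close(conn);
    close(listener);
    return 0;
}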


for python:

http://www.python.org/doc/current/lib/module-socket.html


That raises the question of how the client will know that it has received the entire message sent by the server.
The answer is that recv() will return an empty string when that occurs. And in turn, that will occur when
the server executes close()

recv() will block if no data has been received but the connection has not been closed
recv() will return an empty string when the connection is closed

how to set up a server to support multiple connections?
with select() (supported on Windows) or poll() (not yet available on Windows)

Actions:

Q&A:

My understanding:

Useful Links:

My demo:    TCP_server    TCP_client    UDP_server    UDP_client

(since the demo is written in Python, you may download the Python IDE here)

Tutorial on Network Programming with Python.pdf
 

back to top


QualNet:

|| Reading Notes || Actions || Q&A || My understanding || Useful Links ||

Reading Notes:

 

Actions:

Q&A:

My understanding:

Useful Links:

Comparison between NS2, Glomosim, OPNET, Matlab, GNU Radio

back to top


OPNET:

|| Reading Notes || Actions || Q&A || My understanding || Useful Links ||

Reading Notes:

 

Actions:

Q&A:

My understanding:

Useful Links:

back to top

 


NS-2:

|| Reading Notes || Actions || Q&A || My understanding || Useful Links ||

Reading Notes:

 

Actions:

Q&A:

My understanding:

Useful Links:

back to top

 

 


MATLAB:

|| Reading Notes || Actions || Q&A || My understanding || Useful Links ||

Reading Notes:

How to produce LaTeX symbols, such as \hat and \tilde, in a MATLAB title?

legend({'$\hat{y}_0$','$\hat{y}_1$'},'Interpreter','latex');
    http://stackoverflow.com/questions/132092/what-are-your-favourite-matlab-octave-programming-tricks


 

Actions:

Q&A:

My understanding:

Useful Links:

back to top

 


H.264:

|| Reading Notes || Actions || Q&A || My understanding || Useful Links ||

Reading Notes:

coded video sequence --> Access units --> NAL unit


auxiliary coded picture: must contain the same number of macroblocks as the primary coded picture.
redundant coded picture: is not required to contain all macroblocks in the primary coded picture.

Flexible macroblock ordering (FMO), also known as slice groups, and arbitrary slice ordering (ASO), which are techniques for restructuring the ordering of the representation of the fundamental regions (macroblocks) in pictures.
    --WiKi

 

Intra modes: 9 for 4x4, 4 for 16x16
    For 4x4: In addition to "DC" prediction (where one value is used to predict the entire 4x4 block), eight directional prediction modes are specified, as illustrated on the right-hand side of Fig. 10. [Overview of the H.264...]
    For 16x16: Prediction mode 0 (vertical prediction), mode 1 (horizontal prediction), and mode 2 (DC prediction) are specified similar to the modes in Intra_4x4 prediction except that instead of 4 neighbors on each side to predict a 4x4 block, 16 neighbors on each side to predict a 16x16 block are used. For the specification of prediction mode 4 (plane prediction), please refer to [1]. [Overview of the H.264...]
    For a 16x16 luma block, there are 4 other prediction modes: Mode 0 (vertical mode), Mode 1 (horizontal mode), Mode 2 (DC mode) and Mode 4 (plane mode), which is based on a linear spatial interpolation using the upper and left-hand predictors of the MB.

Inter modes:
    Partitions with luma block sizes of 16x16, 16x8, 8x16, and 8x8 samples are supported by the syntax. In case partitions with 8x8 samples are chosen, one additional syntax element for each 8x8 partition is transmitted. This syntax element specifies whether the corresponding 8x8 partition is further partitioned into partitions of 8x4, 4x8, or 4x4 luma samples and corresponding chroma samples. Fig. 12 illustrates the partitioning. [Overview of the H.264/AVC Video Coding Standard]


Acronym:
Coded_Block_Pattern (CBP): Coded Block Pattern
http://ieeexplore.ieee.org/iel5/10549/33371/01579901.pdf
http://en.wikipedia.org/wiki/Macroblock

slice group:
Each slice group can also be divided in several slices
A slice can be decoded independently
each slice is transmitted independently in separate units called packets
http://en.wikipedia.org/wiki/Flexible_Macroblock_Ordering


EBSP: JVT WD [4], Section 8.1.2, defines the encapsulated byte sequence payload (EBSP). EBSP is basically the same as RBSP; however, EBSP contains additional bytes for preventing start code emulation. The EBSP format is necessary if the decoder detects slice or picture boundaries by start codes. However, since NAL packets do not use start codes to specify the slice boundaries, the EBSP format is not necessarily used. Use of RBSP is bit-efficient and facilitates protocol/format conversion.

 

Deblocking filter:

The purpose of the deblocking filter is to reduce blocking artifacts. It is applied after the inverse transform in both the encoder and the decoder. The filter has two benefits: (a) block edges are smoothed, improving the quality of the decoded image (especially at higher compression ratios); (b) the filtered macroblocks are used for motion-compensated prediction of later frames in the encoder, which yields smaller residuals after prediction. The procedure is: intra-coded macroblocks are filtered, but intra prediction is performed using the unfiltered reconstructed macroblocks, and the edges of the whole picture are not filtered.

Actions:

Q&A:

what is the relationship among slice, RBSP and NAL?

My understanding:


rate_control:
same QP:
    higher motion --> bigger residue data --> higher data rate
                                          --> same quantization distortion
                                          --> higher channel distortion

adaptive QP (bigger QP for bigger residue data and motion vectors):
    higher motion --> bigger residue data --> almost the same data rate
                                          --> higher quantization distortion
                                          --> higher channel distortion

Useful Links:

Rate Control and H.264:    http://www.pixeltools.com/rate_control_paper.html

JM online documents:   

    encoder:    http://iphome.hhi.de/suehring/tml/doc/lenc/html/index.html

    decoder:    http://iphome.hhi.de/suehring/tml/doc/ldec/html/index.html

T264:    Solving the mosaic (blocking artifact) problem in H.264 network transmission

video sequence:    http://yufeng1684.bokee.com/6760108.html (or the reposted copy)

 

 

back to top

 


JM14.0:

|| Reading Notes || Actions || Q&A || My understanding || Useful Links ||

Reading Notes:

-------------------------encoder-----------------------------------------

encoder architecture

encoding:
main()-->encode_one_frame()-->frame_picture()--code_a_picture(frame)-->encode_one_slice()-->start_macroblock()/encode_one_macroblock()
    void (*encode_one_macroblock) (Macroblock *currMB)==>void encode_one_macroblock_high (Macroblock *currMB);
        compute_mode_RD_cost(mode, currMB, enc_mb, &min_rdcost, &min_dcost, &min_rate, i16mode, bslice, &inter_skip);

packetization:
main()-->encode_one_frame()-->writeout_picture()-->writeUnit()

save decoded picture buffer (DPB):
main()-->encode_one_frame()-->store_picture_in_dpb(enc_frame_picture[0])-->insert_picture_in_dpb(dpb.fs[dpb.used_size],p)
    in dump_dpb(), we may set DUMP_DPB = 1 to get debug information!

write test_rec.yuv:
main()-->flush_dpb()-->output_one_frame_from_dpb()-->write_stored_frame(dpb.fs[pos], p_dec)-->write_picture(fs->frame, p_out, FRAME)-->write_out_picture(p, p_out)
    -->img2buf (p->imgY, buf, p->size_x, p->size_y, symbol_size_in_bytes, crop_left, crop_right, crop_top, crop_bottom)
    -->write(p_out, buf, (p->size_y-crop_bottom-crop_top)*(p->size_x-crop_right-crop_left)*symbol_size_in_bytes)
 

Two structures in JM14.0 are particularly important: ImageParameters and StorablePicture.

    Defined in global.h: ImageParameters holds the image parameters used while the program runs

        1) imgpel mpr [MAX_PLANE][16][16];    holds the predicted pixel values

        2) int m7 [MAX_PLANE][16][16];    holds temporary data while the residue data is processed (the comment in the code, "the diff pixel values between the original macroblock/block and its prediction", is misleading)

        3) int ****cofAC; / int ***cofDC;    hold the macroblock coefficients after transform and quantization

        4) Slice *currentSlice;

            DataPartition *partArr;

                 Bitstream *bitstream;

                      byte *streamBuffer;    holds the final encoded result, which is written out to test.264

 

    Defined in mbuffer.h: StorablePicture holds the results of image processing

        imgpel ** imgY; / imgpel *** imgUV;    hold the reconstructed image pixel values, written out to test_rec.yuv

        imgpel ** p_curr_img; ??

        short **** mv;    holds the motion vector values

 

Each of these two structures has a global variable that stores the image processing results:

    Defined in lencod.c: ImageParameters images, *img = &images;

    Defined in image.c: StorablePicture *enc_picture; (StorablePicture **enc_frame_picture is also defined, but the index i of enc_frame_picture[i] depends on the rd_pass variable; in these experiments only enc_frame_picture[0] is used)

 

The files zhifeng.c and zhifeng.h use img->mpr, enc_picture->imgY/enc_picture->imgUV and enc_picture->mv to obtain the motion vectors and residue data of all pixels.

---------------------------------------------------

In FastFullPelBlockMotionSearch() in me_fullfast.c:
mcost = block_sad[pos_00[list][ref]] + MV_COST_SMP (lambda_factor, 0, 0, pred_mv[0], pred_mv[1]);
performs rate-distortion optimization according to RDOptimization. Although RateControlEnable=0 here, every macroblock in a P frame has 7 prediction modes; in theory 4x4 blocks give the smallest distortion but the largest rate, and this statement balances the two.
RateControlEnable is set for the whole video sequence.


Use -d "*.cfg", not -f "*.cfg"!!! (-f seems to take combined parameters; lencod -f "encoder_baseline.cfg" fails with an unmodified JM lencod)

Bit_Buffer can be used to compute the size of each frame, but apparently not the size of a slice.
image.c: Bit_Buffer[total_frame_buffer] = (int) (stats->bit_ctr - stats->bit_ctr_n);


The original image:

image.c-->ReadOneFrame()--> buf = malloc (xs*ys * symbol_size_in_bytes)
                        --> read(p_in, buf, bytes_y)
 

However, buf is freed at the end of ReadOneFrame(): free (buf);
What imgY_org_frm stores is 16 bits per sample, with the upper 8 bits 0x00:
buf2img(imgY_org_frm_JV[0], buf, xs, ys, symbol_size_in_bytes);
buf2img(imgUV_org_frm[0], buf, xs_cr, ys_cr, symbol_size_in_bytes);
buf2img(imgUV_org_frm[1], buf, xs_cr, ys_cr, symbol_size_in_bytes);


Important parameters:
stats->bit_ctr_parametersets_n: the number of bits used by the SPS and PPS
    start_sequence()-->stats->bit_ctr_parametersets_n = len;
    encode_one_frame()-->stats->bit_ctr_parametersets_n=0;

stats->bit_ctr_n: the total number of bits before the frame currently being processed
    encode_one_frame()-->stats->bit_ctr_n = stats->bit_ctr;

stats->bit_ctr: cleared to zero in ReportFirstframe()!
    ReportFirstframe()-->stats->bit_ctr = 0;

ok:
output:
.cfg
ReportFrameStats = 0 # (0:Disable Frame Statistics 1: Enable)
DisplayEncParams = 0 # (0:Disable Display of Encoder Params 1: Enable)
Verbose = 1 # level of display verboseness (0:short, 1:normal, 2:detailed)

in dump_dpb(), we may set DUMP_DPB = 1 to get debug information!

Global variables:
frame_pic
    defined in global.h: Picture **frame_pic;
*img:
    defined in lencod.c: ImageParameters images, *img = &images;
    referenced in global.h: extern ImageParameters *img;


difference between JM13 and JM14:
there is no LoopFilterDisable parameter in JM14; the deblocking filter is configured with different parameters!

-------------------------decoder-----------------------------------------

decoder architecture


The motion compensation process

Predicting the MV: when predicting a macroblock's MV, the neighbors a, b and c are set to zero if they are unavailable.
    mv_a = block_a.available ? tmp_mv[list][block_a.pos_y][block_a.pos_x][hv] : 0;
    mv_b = block_b.available ? tmp_mv[list][block_b.pos_y][block_b.pos_x][hv] : 0;
    mv_c = block_c.available ? tmp_mv[list][block_c.pos_y][block_c.pos_x][hv] : 0;
        decode_one_slice()-->read_one_macroblock()-->SetMotionVectorPredictor()

Reading in the motion vectors:
curr_mv [k] = (short)(curr_mvd[k] + pred_mv[k]);
    decode_one_slice()-->read_one_macroblock()-->readMotionInfoFromNAL()-->readMBMotionVectors()

Reading in the coefficients:
    read_one_macroblock()-->readCBPandCoeffsFromNAL()

Applying the prediction (mpr): (the residue has not yet been added)
memcpy(&(curr_mpr[0][ioff]), &(block[0][0]), hor_block_size * ver_block_size * sizeof(imgpel));
    decode_one_slice()-->decode_one_macroblock()-->perform_mc()-->mc_prediction()

Applying the inverse transform:
inverse4x4()
    decode_one_slice()-->decode_one_macroblock()-->iTransform()-->iMBtrans4x4()-->itrans_4x4()

Adding the residue to the prediction (mpr):
m7[j][i] = iClip1(max_imgpel_value, rshift_rnd_sf((m7[j][i] + ((long)mpr[j][i] << DQ_BITS)), DQ_BITS));
    decode_one_slice()-->decode_one_macroblock()-->iTransform()-->iMBtrans4x4()-->itrans_4x4()

 

mb_type defined in defines.h
enum {
PSKIP = 0,
BSKIP_DIRECT = 0,
P16x16 = 1,
P16x8 = 2,
P8x16 = 3,
SMB8x8 = 4,
SMB8x4 = 5,
SMB4x8 = 6,
SMB4x4 = 7,
P8x8 = 8,
I4MB = 9,
I16MB = 10,
IBLOCK = 11,
SI4MB = 12,
I8MB = 13,
IPCM = 14,
MAXMODE = 15
} MBModeTypes;



Computing the SNR
    snr->snr[k]=(float)(10*log10(max_pix_value_sqd[k]*(double)((double)(comp_size_x[k])*(comp_size_y[k]) / diff_comp[k])));


Computing the average SNR
    snr->snra[k]=(float)(snr->snra[k]*(snr->frame_ctr)+snr->snr[k])/(snr->frame_ctr + 1); // average snr chroma for all frames
    read_new_slice()-->init_picture()-->exit_picture()-->store_picture_in_dpb()-->direct_output()-->find_snr()

What happens when a packet is lost
Setting the flags:
1) currSlice->dpC_NotPresent =1; @ if ((slice_id_c != slice_id_a)|| (nalu->lost_packets))
        read_new_slice()
2) currMB->dpl_flag = 1; @ if (IS_INTER (currMB) && currSlice->dpC_NotPresent )
        decode_one_slice()-->read_one_macroblock()-->readCBPandCoeffsFromNAL()
3) cbp = 0; / currMB->cbp = cbp; @ if (currMB->dpl_flag)
        decode_one_slice()-->read_one_macroblock()-->readCBPandCoeffsFromNAL()
Concealment:
read_new_slice()-->init_picture()-->exit_picture()-->ercConcealIntraFrame()-->concealBlocks()-->ercPixConcealIMB()-->pixMeanInterpolateBlock()
 

There are two kinds of error concealment:
intra:
read_new_slice()-->init_picture()-->exit_picture()-->ercConcealIntraFrame()

inter:
read_new_slice()-->init_picture()-->exit_picture()-->ercConcealInterFrame()

Which inter concealment is used is chosen by comparing erc_mvperMB with MVPERMB_THR:
if(erc_mvperMB >= MVPERMB_THR)
    concealByTrial();
else
    concealByCopy();


How erc_mvperMB is computed:
for 8x4, 4x8 and 4x4 blocks, the average is taken to obtain the 8x8 erc_mvperMB;
for 16x16, 16x8, 8x16 and 8x8 the value is assigned directly;
these are then accumulated into one erc_mvperMB per MB. (If MVPERMB_THR is set to 8, the sum of the MB's average MVx and MVy cannot exceed 2; since one pixel corresponds to an mv value of 4, with MVPERMB_THR set to 8 as in JM14.0 the sum of MVx and MVy cannot exceed 0.5 pixel, so concealByCopy() is essentially never executed.)
    decode_one_slice()-->ercWriteMBMODEandMV()

How the concealment method is selected from the settings:
if (img->conceal_mode==1)
exit_picture()-->store_picture_in_dpb()-->output_one_frame_from_dpb()-->write_lost_ref_after_idr()-->copy_to_conceal()
However, before write_lost_ref_after_idr(), obtaining the poc via get_smallest_poc() @ (while (dpb.used_size==dpb.size)) seems problematic; it only runs after 16 frames. Then dpb.last_output_poc = poc.
Also:
dpb.size = getDpbSize()
    init_dpb()
size = imin( size, 16);
    init_dpb()-->getDpbSize()

Actions:

Q&A:

Why is WriteOneFrameMV() placed in main() in lencod.c, while WriteOneFrameResidue() is placed right after encode_one_macroblock() in encode_one_slice() in slice.c?

    1) Since img->mpr contains only one macroblock's data, it cannot be placed in lencod.c the way WriteOneFrameMV() is, where enc_picture->mv contains a whole frame's data.

        img->mpr is changed in encode_one_macroblock_low()-->LumaResidualCoding()/(Line617)-->LumaResidualCoding8x8()-->LumaPrediction(): memcpy(&(curr_mpr[j][block_x]), l0pred, block_size_x * sizeof(imgpel));

    2) The best place to put WriteOneFrameResidue() is right after encode_one_macroblock() because enc_picture->imgY/enc_picture->imgUV is changed in three locations:

        a) encode_one_macroblock_low()-->LumaResidualCoding()/(Line617)-->LumaResidualCoding8x8()-->pDCT_4x4()/(Line901)<-->dct_4x4()-->SampleReconstruct(): *imgOrg++ = iClip1( max_imgpel_value, rshift_rnd_sf(*m7++, dq_bits) + *imgPred++);

        b) encode_one_macroblock_low()-->LumaResidualCoding()/(Line617)-->LumaResidualCoding8x8(): memcpy(&enc_picture->imgY[img->pix_y + j][img->pix_x + mb_x], &img->mpr[0][j][mb_x], 2 * BLOCK_SIZE * sizeof(imgpel));

        c) encode_one_macroblock_low()-->LumaResidualCoding()/(Line617): memcpy(&enc_picture->imgY[img->pix_y+j][img->pix_x], img->mpr[0][j], MB_BLOCK_SIZE * sizeof (imgpel));

        So, we may put WriteOneFrameResidue() right after LumaResidualCoding(); however, to support the other encode_one_macroblock methods in general, we put WriteOneFrameResidue() right after encode_one_macroblock().

 

Why not use img->m7 as the residue data output?

    Although img->m7 is used as a temporary variable while the residue data is processed, it never holds the exact residue data value. img->m7 is changed in two locations:

        a) encode_one_macroblock_low()-->LumaResidualCoding()/(Line617)-->LumaResidualCoding8x8()-->ComputeResidue(): *m7++ = *imgOrg++ - *imgPred++; (where m7 has not been quantized)

        b) after forward4x4() and quant_4x4(),

            If there is nonzero coefficient, img->m7 is changed in inverse4x4() and the value should be rshift_rnd_sf() to get residue data as in SampleReconstruct().

            If there is no nonzero coefficient, img->m7 is invalid because only predicted value is used for enc_picture->p_curr_img.

 

If there is no residue data output in the global structures, how does JM encode the residue data?

    The residue data is needed only after transform and quantization in the H.264 decoder, so JM14.0 does not keep the original residue data in a global variable.

    The one used for H.264 decoder is defined in img->cofAC/img->cofDC:

        which are changed in quant_4x4(),<-->quant_4x4_around() (int* ACLevel = img->cofAC[b8][b4][0]; defined in dct_4x4())

        which are used to be VLC coded in main()-->encode_one_frame()-->frame_picture()-->code_a_picture(frame)-->encode_one_slice()-->write_one_macroblock()-->writeMBLayer()-->writeCoeff16x16()-->writeCoeff8x8()-->writeCoeff4x4_CAVLC()

 

Which variables is test_rec.yuv written from?
        dpb.fs[pos]: write_stored_frame(dpb.fs[pos], p_dec)

        enc_frame_picture[0]:
                Address initialization: prepare_enc_frame_picture( &enc_frame_picture[0] )<--frame_picture()
                        1) get_mem2Dpel (&(s->imgY), size_y, size_x) assigns an address to s->imgY, where s is the return value of alloc_storable_picture.
                        2) (*stored_pic) = alloc_storable_picture ((PictureStructure) img->structure, img->width, img->height, img->width_cr, img->height_cr) assigns an address to StorablePicture **stored_pic, the parameter of prepare_enc_frame_picture.
                        Note: memory is not allocated by assigning to enc_frame_picture[0] directly; instead enc_frame_picture[0] is passed as the argument of prepare_enc_frame_picture, which assigns the address!
                Writing the contents: encode_one_macroblock_high()-->compute_mode_RD_cost()-->RDCost_for_macroblocks()
                        -->Intra16x16_Mode_Decision()-->currMB->cbp = dct_16x16 (currMB, PLANE_Y, *i16mode)-->img_Y[i] = iClip1( max_imgpel_value, rshift_rnd_sf(M1[j][i], DQ_BITS) + predY[i]); this writes two bytes at a time into &enc_frame_picture[0]
                                compute_mode_RD_cost() writes 16 times (two bytes each)
                        -->currMB->cbp = Mode_Decision_for_Intra4x4Macroblock (currMB, lambda, &dummy_d)-->Mode_Decision_for_8x8IntraBlocks()-->Mode_Decision_for_4x4IntraBlocks()-->RDCost_for_4x4IntraBlocks()
                        -->*nonzero = pDCT_4x4 (currMB, PLANE_Y, block_x, block_y, &dummy, 1)-->orig_img[i] = iClip1( max_imgpel_value, rshift_rnd_sf(m7[i], DQ_BITS) + pred_img[i])

        In insert_picture_in_dpb(dpb.fs[dpb.used_size],p), dpb.fs is pointed at enc_frame_picture[0]
                main()-->encode_one_frame()-->store_picture_in_dpb(enc_frame_picture[0])-->insert_picture_in_dpb(dpb.fs[dpb.used_size],p)

        Opening the file: p_dec=open(params->ReconFile, OPENFLAGS_WRITE, OPEN_PERMISSIONS)
                main()-->Configure()-->PatchInp()
        Closing the file: close(p_dec)
                main()

 

Which variables is test.264 written from?
        frame_pic[img->rd_pass]:
                Initialization:
                        The global variable Picture **frame_pic; is defined in global.h
                        frame_pic[0] is passed as frame: frame_picture (frame_pic[0], 0)
                                main()-->encode_one_frame()-->frame_picture()
                        frame/frame_pic[0] is passed as pic: code_a_picture(frame);
                                main()-->encode_one_frame()-->frame_picture()-->code_a_picture(frame)
                                In encode_one_slice() the pic parameter is only assigned to img->currentPicture: img->currentPicture = pic
                        pic/frame/frame_pic[0] is assigned to img->currentPicture: img->currentPicture = pic
                                img is a global variable: ImageParameters images, *img = &images; (lencod.c) / extern ImageParameters *img; (global.h)
                        img->currentPicture is assigned to currPic: Picture *currPic = img->currentPicture;
                        Memory is allocated in init_slice(): currPic->slices[currPic->no_slices-1] = malloc_slice();
                        i.e.: frame_pic[0]->slices[currPic->no_slices-1] = malloc_slice() / img->currentPicture->slices[currPic->no_slices-1] = malloc_slice()
                                main()-->encode_one_frame()-->frame_picture()-->code_a_picture(frame)-->encode_one_slice()-->init_slice()

                Writing the contents:
                        currStream->streamBuffer[currStream->byte_pos++] = currStream->byte_buf
                                encode_one_macroblock_high()-->submacroblock_mode_decision()-->RDCost_for_8x8blocks()-->writeCoeff8x8()-->writeCoeff4x4_CAVLC()
                                -->writeSyntaxElement_NumCoeffTrailingOnes()-->writeUVLC2buffer()
                                -->writeSyntaxElement_Level_VLC1()/writeSyntaxElement_Level_VLCN()-->writeUVLC2buffer()
                                -->writeSyntaxElement_TotalZeros()-->writeUVLC2buffer()
                                encode_one_macroblock_high()-->compute_mode_RD_cost()/a total of (max_index = 9) times-->RDCost_for_macroblocks()-->writeMBLayer()-->writeCoeff16x16()-->writeCoeff8x8()-->writeCoeff4x4_CAVLC()

                Writing to test.264:
                        currSlice = pic->slices[slice]; here currSlice points, through the formal parameter pic of writeout_picture (frame_pic[img->rd_pass]), to the actual argument frame_pic[img->rd_pass]: currSlice = pic->slices[slice] (=frame_pic[img->rd_pass]->slices[slice])
                        currStream = (currSlice->partArr[partition]).bitstream (=(frame_pic[img->rd_pass]->slices[slice]->partArr[partition]).bitstream)
                                main()-->encode_one_frame()-->writeout_picture()
                        memcpy (&nalu->buf[1], currStream->streamBuffer, nalu->len-1); then: WriteNALU (nalu)
                                main()-->encode_one_frame()-->writeout_picture()-->writeUnit()
                Opening the file: f = fopen (Filename, "wb")
                        main()-->start_sequence()-->OpenAnnexbFile()
                Closing the file: fclose (f)
                        main()-->terminate_sequence()-->CloseAnnexbFile()
 

Where is the MV estimated?
1)
mv[0] = offset_x + spiral_search_x[best_pos];
mv[1] = offset_y + spiral_search_y[best_pos];
        encode_one_macroblock_low()-->PartitionMotionSearch()
        encode_one_macroblock_low()-->submacroblock_mode_decision()-->PartitionMotionSearch()
        -->BlockMotionSearch()-->IntPelME()<-->FastFullPelBlockMotionSearch()
2)
Then, skip mode may overwrite the mv:
mv[0] = img->all_mv [0][0][0][0][0][0];
mv[1] = img->all_mv [0][0][0][0][0][1];
        encode_one_macroblock_low()-->PartitionMotionSearch()-->BlockMotionSearch()
        encode_one_macroblock_low()-->submacroblock_mode_decision()-->PartitionMotionSearch()-->BlockMotionSearch()
3)
The contents of the mv: short*** all_mv = &img->all_mv[list][ref][blocktype][block_y]; //!< block type (1-16x16 ... 7-4x4)
all_mv[0][i][0] = mv[0];
all_mv[0][i][1] = mv[1];
        encode_one_macroblock_low()-->PartitionMotionSearch()-->BlockMotionSearch()
        encode_one_macroblock_low()-->submacroblock_mode_decision()-->PartitionMotionSearch()-->BlockMotionSearch()

Setting PSliceSkip: enc_mb->valid[0] = (!intra && params->InterSearch[bslice][0])
encode_one_macroblock_low()-->init_enc_mb_params()


        img->all_mv
                Memory allocation:
int get_mem_mv (short ******* mv)
{
// LIST, reference, block_type, block_y, block_x, component
get_mem6Dshort(mv, 2, img->max_num_references, 9, 4, 4, 2);

return 2 * img->max_num_references * 9 * 4 * 4 * 2 * sizeof(short);
}
                        get_mem_mv (&(img->pred_mv));
                        get_mem_mv (&(img->all_mv));
                                main()->init_img()


        enc_picture->mv
                Initialization
                Writing the contents: enc_picture->mv is written via img->all_mv
                        memcpy(enc_picture->mv [LIST_0][block_y][block_x + i], img->all_mv[LIST_0][best_l0_ref][mode][j][i], 2 * sizeof(short))
                                encode_one_macroblock_low()-->assign_enc_picture_params()
                        enc_picture->mv[LIST_0][by][bx][0] = all_mv [LIST_0][ ref][mode8][j][i][0];
                        enc_picture->mv[LIST_0][by][bx][1] = all_mv [LIST_0][ ref][mode8][j][i][1];
                                encode_one_macroblock_low()-->SetMotionVectorsMB (currMB, bslice)

        enc_frame_picture[0]


When are the MVs written out?

    After encode_one_frame()


When do the MVs change?
        init_frame ()
                main()-->encode_one_frame()
 

When is the residue written out?

//===== S E T   F I N A L   M A C R O B L O C K   P A R A M E T E R S ======
        via (*curr_mpr)[16] = img->mpr[0]:
                memcpy(&(curr_mpr[j][block_x]), l0pred, block_size_x * sizeof(imgpel));

        via m7 = &img_m7[j][mb_x]: *m7++ = *imgOrg++ - *imgPred++;
                encode_one_macroblock_low()-->LumaResidualCoding()-->LumaResidualCoding8x8()-->ComputeResidue()/Line863

        //===== DCT, Quantization, inverse Quantization, IDCT, Reconstruction =====
        via img->m7:
                (*curr_res)[MB_BLOCK_SIZE] = img->m7[pl];
                m7 = &img_m7[j][mb_x];
                SampleReconstruct (imgpel **curImg, imgpel mpr[16][16], int img_m7[16][16], int mb_y, int mb_x, int opix_y, int opix_x, int width, int height, int max_imgpel_value, int dq_bits)
                *imgOrg++ = iClip1( max_imgpel_value, rshift_rnd_sf(*m7++, dq_bits) + *imgPred++);
                        encode_one_macroblock_low()-->LumaResidualCoding()/(Line617)-->LumaResidualCoding8x8()-->pDCT_4x4()/(Line901)<-->dct_4x4()-->SampleReconstruct()
                        encode_one_macroblock_low()-->ChromaResidualCoding()/(Line645)
/*!
        Computing the macroblock residue m7:
                *m7++ = *imgOrg++ - *imgPred++;
                        encode_one_macroblock_low()-->submacroblock_mode_decision()-->LumaResidualCoding8x8()-->ComputeResidue()

                m7_line = &m7[j][block_x];
                *m7_line++ = (int) (*cur_line++ - *prd_line++)
                        encode_one_macroblock_low()-->Mode_Decision_for_Intra4x4Macroblock()-->Mode_Decision_for_8x8IntraBlocks()-->Mode_Decision_for_4x4IntraBlocks_JM_Low()-->generate_pred_error()
                img->m7[uv + 1][j][i] = imgUV_org[uv][img->opix_c_y+j][img->opix_c_x+i] - curr_mpr[j][i]; (imgpel ***imgUV_org; is defined in global.h)
                        encode_one_macroblock_low()-->ChromaResidualCoding()/line645
*/
Several kinds of intra prediction:
encode_one_macroblock_low()-->Mode_Decision_for_Intra4x4Macroblock()-->Mode_Decision_for_8x8IntraBlocks()-->Mode_Decision_for_4x4IntraBlocks_JM_Low()-->intrapred_4x4()

encode_one_macroblock_low()-->Mode_Decision_for_new_Intra8x8Macroblock()-->(the function pointers below)-->Mode_Decision_for_new_8x8IntraBlocks_JM_High()/Mode_Decision_for_new_8x8IntraBlocks_JM_Low()-->intrapred_8x8()
encode_one_macroblock_high()/encode_one_macroblock_highfast()/encode_one_macroblock_highloss()-->compute_mode_RD_cost()-->RDCost_for_macroblocks()-->Mode_Decision_for_new_Intra8x8Macroblock()-->(above)-->Mode_Decision_for_new_8x8IntraBlocks_JM_High()/Mode_Decision_for_new_8x8IntraBlocks_JM_Low()-->intrapred_8x8()
        Mode_Decision_for_new_8x8IntraBlocks = Mode_Decision_for_new_8x8IntraBlocks_JM_Low;
        Mode_Decision_for_new_8x8IntraBlocks = Mode_Decision_for_new_8x8IntraBlocks_JM_High;

encode_one_macroblock_low()-->intrapred_16x16()
encode_one_macroblock_high()/encode_one_macroblock_highfast()/encode_one_macroblock_highloss()-->compute_mode_RD_cost()-->RDCost_for_macroblocks()-->Intra16x16_Mode_Decision()-->intrapred_16x16()

?? Why can't img->m7 be used to hold the residue (even ignoring post-scaling, the last step (8-356) of 8.5.10)?
Because in LumaResidualCoding8x8(), after pDCT_4x4 has run, one more line modifies the enc_picture->imgY value, so m7 is not the residue:
memcpy(&enc_picture->imgY[img->pix_y + j][img->pix_x + mb_x], &img->mpr[0][j][mb_x], 2 * BLOCK_SIZE * sizeof(imgpel));
        encode_one_macroblock_low()-->submacroblock_mode_decision()-->LumaResidualCoding8x8()
 

-------------------------decoder-----------------------------------------

Is DeblockPicture( img, dec_picture ) in exit_picture() always executed?
No; the decision is made inside the function.

How is error concealment performed after a whole frame has been processed?
In init_picture():
    if (dec_picture)
    {
        // this may only happen on slice loss
        exit_picture();
    }
        read_new_slice()-->init_picture()

It really belongs at the end of decode_one_frame()!!! But because while ((currSlice->next_header != EOS && currSlice->next_header != SOP)) is always true, that point is never reached; instead the loop exits from if (current_header == EOS).

My understanding:

Suggestions for JM:
1) A flag should be added to the MB to indicate which 8x8 sub-MBs have had their coefficients set to zero! The coefficients are set in several places without img->m7 being updated accordingly, which makes it very hard to obtain the residue data.
LumaResidualCoding8x8() is used in two places; the second one needs reset_residuex_zero() again:
        encode_one_macroblock_low()-->submacroblock_mode_decision()-->LumaResidualCoding8x8()
        encode_one_macroblock_low()-->LumaResidualCoding()-->LumaResidualCoding8x8()


There are two places where imgY is reset from the mpr values:
memcpy(&enc_picture->imgY[img->pix_y+j][img->pix_x], tr8x8.mpr8x8[j], MB_BLOCK_SIZE * sizeof(imgpel));
or: memcpy (&enc_picture->imgY[img->pix_y+j][img->pix_x],tr4x4.mpr8x8[j], MB_BLOCK_SIZE * sizeof(imgpel));
        encode_one_macroblock_low()-->SetCoeffAndReconstruction8x8()
memcpy(&enc_picture->imgY[img->pix_y+j][img->pix_x], img->mpr[0][j], MB_BLOCK_SIZE * sizeof (imgpel));
        encode_one_macroblock_low()-->LumaResidualCoding()

 

-------------------------decoder-----------------------------------------

? When the RTP packet of partition C arrives before the RTP packet of partition A, does the decoder have a problem? (read_next_nalu(nalu) is executed inside NALU_TYPE_DPA)

Suggestions for JM:
1) The value of MVPERMB_THR is too small (JM14.0 uses 8, in erc_api.h)
2) The computation of erc_mvperMB is flawed:
erc_mvperMB /= dec_picture->PicSizeInMbs; cannot be used,
because erc_mvperMB is accumulated in decode_one_slice()-->ercWriteMBMODEandMV(), and decode_one_slice() is not executed when the partition A packet is lost.
erc_mvperMB += iabs(pRegion->mv[0]) + iabs(pRegion->mv[1]);
        read_new_slice()-->init_picture()-->exit_picture()

Useful Links:

0) for beginner:
http://www.h263l.com/h264/h264_overview.pdf

1) JM14.0 source code:
http://iphome.hhi.de/suehring/tml/download/

2) H.264 standard:
http://www.itu.int/rec/T-REC-H.264

3) H.264 Wiki:
http://en.wikipedia.org/wiki/H.264

4) JM online document:
http://iphome.hhi.de/suehring/tml/doc/lenc/html/index.html
http://iphome.hhi.de/suehring/tml/doc/ldec/html/index.html

5) JM manual:
http://iphome.hhi.de/suehring/tml/JM%20Reference%20Software%20Manual%20(JVT-X072).pdf

6) Overview of the H.264/AVC Video Coding Standard
http://ieeexplore.ieee.org/iel5/76/27384/01218189.pdf

7) H264Visa.exe
8) YUVviewer.exe

9) RTP Payload Format for H.264 Video
http://tools.ietf.org/html/rfc3984

10) x264
http://blog.csdn.net/sunshine1314/archive/2005/05/20/377158.aspx
http://lspbeyond.go1.icpcn.com/x264/index.htm

11) white paper
http://www.vcodex.com/h264.html(in sections)
http://bbs.chinavideo.org/viewthread.php?tid=3336&extra=page%3D1(whole)

 

back to top

 


FFMPEG:

|| Reading Notes || Actions || Q&A || My understanding || Useful Links ||

Reading Notes:

How to support camera in windows for FFMPEG?

ffmpeg -r 15 -f vfwcap -i 0 output.mpg

    http://ffmpeg.arrozcru.org/forum/viewtopic.php?f=8&t=763

 

Actions:

Q&A:

My understanding: