1. Don't use the recycling bin. This sort of breaks my workflow, so I prefer to:
2. Use nautilus.
You'll have to install gtk packages, but that's a small price to pay to have KDE not crash every couple of minutes.
Luckily, when you are using the :s (substitute) command you can pick a different separator. Instead of typing "%s/\/foo\/bar\/baz\//foo\/bar\//g", you can simply type "%s#/foo/bar/baz/#foo/bar/#g". Vim will automagically detect that you want to use '#' as the delimiter, and you'll end up with a much more readable pattern.
Extra tip: this also works in sed
Let's build a simple example, similar to what we used last time: an object that will determine the range of an integer and then invoke a callback with the closest range. Something like this could be used, for example, to allocate a buffer.
typedef void (*func)(int);

void boring(int x, func f) {
    if (x < 2) {
        f(2);
    } else if (x < 4) {
        f(4);
    } else if (x < 8) {
        f(8);
    } else if (x < 16) {
        // You get the idea...
    }
}
Can we build a prettier template version of this code, without any overhead? Let's try:
#include <cstddef>

typedef void (*func)(int);

template <int My_Size>
struct Foo {
    void bar(size_t size, func callback) {
        if (size > My_Size) {
            callback(My_Size);
        } else {
            next_foo.bar(size, callback);
        }
    }
    Foo<My_Size/2> next_foo;
};

// Stop condition
template<> struct Foo<0> {
    void bar(size_t, func) { }
};

void wrapper(int x, func f) {
    Foo<512> jump_table;
    jump_table.bar(x, f);
}
And now, let's compile with "g++ -fverbose-asm -S -O0 -c foo.cpp -o /dev/stdout | c++filt". You'll see something like this:
wrapper(int, void (*)(int)):
call Foo<512>::bar(unsigned long, void (*)(int))
Foo<512>::bar(unsigned long, void (*)(int)):
cmpq $512, %rsi #, size
jbe .L4
call *%rdx # callback
jmp .L3
.L4:
call Foo<256>::bar(unsigned long, void (*)(int)) #
.L3:
leave
Foo<256>::bar(unsigned long, void (*)(int)):
cmpq $256, %rsi #, size
jbe .L4
call *%rdx # callback
jmp .L3
.L4:
call Foo<128>::bar(unsigned long, void (*)(int)) #
.L3:
leave
# You get the idea, right?
Foo<0>::bar(unsigned long, void (*)(int)):
# Stop condition, do nothing
That doesn't look too good, does it? No need to worry: we already learned that gcc needs help from the optimizer to deal with template expansion and non-static method calls. Let's move to -O1:
wrapper(int, void (*)(int)):
.LFB14:
cmpq $512, %rdi #, D.2974
jbe .L2 #,
movl $512, %edi #,
call *%rsi # f
jmp .L1 #
.L2:
cmpq $256, %rdi #, D.2974
jbe .L4 #,
movl $256, %edi #,
call *%rsi # f
jmp .L1 #
# Again, it should be clear what's going on...
.L11:
cmpq $1, %rdi #, D.2974
.p2align 4,,2
jbe .L1 #,
movl $1, %edi #,
.p2align 4,,2
call *%rsi # f
.L1:
It's better than last time, but it still doesn't look great: gcc managed to inline all the calls, but it stopped there. Let's move to -O2 and see what happens:
wrapper(int, void (*)(int)):
movslq %edi, %rdi # x, D.2987
cmpq $512, %rdi #, D.2987
ja .L13 #,
cmpq $256, %rdi #, D.2987
ja .L14 #,
[ .... ]
cmpq $2, %rdi #, D.2987
ja .L21 #,
.L13:
movl $512, %edi #,
jmp *%rsi # f
.L14:
movl $256, %edi #,
jmp *%rsi # f
[ .... ]
.L21:
movl $2, %edi #,
jmp *%rsi # f
.L1:
rep
ret
.p2align 4,,10
.p2align 3
Now that looks much better. We can also see that gcc generates the same code at -O2 for both versions of our code.
(*) Just for the sake of completeness:
for fname in $(ls | grep foo); do echo $fname; done
You can save some typing by using bash-globbing:
for fname in *foo*; do echo "$fname"; done
Not only will the script be cleaner and faster, but bash will also take care of properly expanding the file names, so (as long as you quote "$fname") you won't have to worry about things like filenames with spaces. This should be portable to other shells too.
Want to know more about bash globbing? Check out http://www.linuxjournal.com/content/bash-extended-globbing
> set print repeats 0
> set print elements 0
Analyzing the assembly output for template devices can be a bit discouraging at times, especially when we spend hours trying to tune a mean-looking template class only to find out the compiler is not able to reduce it the way we expected. But hold on: before throwing all your templates away, you might want to figure out why they are not being optimized.
Let's start with a simple example: a template device to return the next power of 2:
template <int n, long curr_pow, bool stop>
struct Impl_Next_POW2 {
    static const bool is_smaller = n < curr_pow;
    static const long next_pow = Impl_Next_POW2<n, curr_pow*2, is_smaller>::pow;
    static const long pow = is_smaller? curr_pow : next_pow;
};

template <int n, long curr_pow>
struct Impl_Next_POW2<n, curr_pow, true> {
    // This specialization is important to stop the expansion
    static const long pow = curr_pow;
};

template <int n>
struct Next_POW2 {
    // Just a wrapper for Impl_Next_POW2, to hide away some
    // implementation details
    static const long pow = Impl_Next_POW2<n, 1, false>::pow;
};
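For reference, here's a minimal driver to check the constant folding. The driver and the static_assert are my own sketch (assuming a C++11 compiler); the value 17 matches the one that shows up in the assembly listings below:
int main() {
    // Next_POW2<17>::pow is computed entirely at compile time: the next power of 2 after 17 is 32
    static_assert(Next_POW2<17>::pow == 32, "compile-time next power of 2");
    return static_cast<int>(Next_POW2<17>::pow);
}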
Gcc can easily optimize that away: if you compile with "g++ foo.cpp -c -S -o /dev/stdout" you'll see the whole thing is replaced by a compile-time constant. Let's make gcc's life a bit more complicated now:
template <int n, long curr_pow, bool stop>
struct Impl_Next_POW2 {
    static long get_pow() {
        static const bool is_smaller = n < curr_pow;
        return is_smaller?
                    curr_pow :
                    Impl_Next_POW2<n, curr_pow*2, is_smaller>::get_pow();
    }
};

template <int n, long curr_pow>
struct Impl_Next_POW2<n, curr_pow, true> {
    static long get_pow() {
        return curr_pow;
    }
};

template <int n>
struct Next_POW2 {
    static long get_pow() {
        return Impl_Next_POW2<n, 1, false>::get_pow();
    }
};
It's the same code, but instead of using plain static values we wrap everything in a method. Compile with "g++ foo.cpp -c -S -fverbose-asm -o /dev/stdout | c++filt" and you'll now see something like this:
main:
call Next_POW2<17>::get_pow()
Next_POW2<17>::get_pow():
call Impl_Next_POW2<17, 1l, false>::get_pow()
Impl_Next_POW2<17, 1l, false>::get_pow():
call Impl_Next_POW2<17, 2l, false>::get_pow()
Impl_Next_POW2<17, 2l, false>::get_pow():
call Impl_Next_POW2<17, 4l, false>::get_pow()
Impl_Next_POW2<17, 4l, false>::get_pow():
call Impl_Next_POW2<17, 8l, false>::get_pow()
Impl_Next_POW2<17, 8l, false>::get_pow():
call Impl_Next_POW2<17, 16l, false>::get_pow()
Impl_Next_POW2<17, 16l, false>::get_pow():
call Impl_Next_POW2<17, 32l, false>::get_pow()
Impl_Next_POW2<17, 32l, false>::get_pow():
movl $32, %eax #, D.2171
What went wrong? To us it's very clear that the whole thing is just a chain of calls which could be replaced by the last one; however, that information is now only available by "inspecting" the body of each function, and that's something the template instantiator (at least in gcc) can't do. Luckily you just need to enable optimizations, -O1 is enough, to have gcc output the reduced version again.
Keep that in mind next time you're optimizing your code with template metaprogramming: sometimes the template expander needs some help from the optimizer too.
#!/bin/bash
foobar() {
    echo "See ya!"
}
trap "foobar" EXIT
It doesn't matter how you end this script: "foobar" will always be executed. Want to read more about bash traps? Check http://linuxcommand.org/wss0160.php
Take a look at this code:
#include <vector>

void do_something(const int&);

void foo() {
    std::vector<int> v = {1,2,3,4,5};
    const int &num = v.at(1);
    v.push_back(42);
    do_something(num);
}
Doesn't seem quite right, does it? push_back will most likely trigger a reallocation of the vector, and that invalidates references to its elements. num would end up referring to who knows where, so using it to call do_something is not valid C++. Or is it? What happens if we reserve some space for v first?
#include <vector>

void do_something(const int&);

void foo() {
    std::vector<int> v = {1,2,3,4,5};
    v.reserve(40);
    const int &num = v.at(1);
    v.push_back(6);
    do_something(num);
}
Again it might seem wrong, but this is in fact valid C++ code. Common sense might tell us that a call to push_back automatically invalidates references to elements in the vector, and that this only works because most implementations do the reasonable thing (i.e. not invalidate references unless they must). It turns out the standard makes a special provision for this case in section 23.3.6.5: a reallocation is guaranteed to be triggered if, and only if, the vector's capacity is not enough, and references to elements in the vector are guaranteed to remain valid unless a reallocation is triggered.
A bit of language lawyering shows that what seems like an error is in fact allowed by the standard. But even if this is valid C++, keep in mind that assuming the capacity of a vector will be enough is a VERY big assumption: it's very easy to break, and you won't get any warning when that happens (maybe a core dump, if you're lucky).
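If you do rely on this, one way to make the assumption explicit is to check the capacity right before the push_back. A minimal sketch (the assert is my addition, not part of the original example):
#include <cassert>
#include <vector>

void do_something(const int&);

void foo() {
    std::vector<int> v = {1,2,3,4,5};
    v.reserve(40);
    const int &num = v.at(1);
    assert(v.capacity() > v.size()); // document the assumption: the next push_back must not reallocate
    v.push_back(6);
    do_something(num); // still valid: no reallocation happened, so num was not invalidated
}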
Until we get C++14's digit separators we don't have a nice alternative, but there is an ugly hack we can use: instead of writing 1000000, write "1 ## 000 ## 000".
It works because '##' is the preprocessor's token-pasting operator: it pastes two tokens together (keep in mind that token pasting only happens inside a macro definition). It looks ugly and breaks syntax highlighting, but at least you can count how many zeros you've got.
Nitpicker's corner: multiplying by 10 is easier, but there is no job security in that.
Nitpicker's corner II: the evaluation order of a chain of '##' is unspecified, but I don't expect that to cause any problems; every evaluation order yields the same token in this case.
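For completeness, a minimal sketch of the hack in practice (the macro name is made up, and a C++11 compiler is assumed for the static_assert):
// Hypothetical macro: the preprocessor pastes the pieces back into a single numeric literal
#define ONE_MILLION 1 ## 000 ## 000

static_assert(ONE_MILLION == 1000000, "token pasting produced the expected literal");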
Say, for example, you like to encrypt your text. Not always, but every once in a while. Enough to make a shortcut for it but not enough to remember what the shortcut is. You can try to grep your ~/.vimrc. You might find something like:
" Encrypt my stuff
map <leader>e ggg?G<CR>
(Yes, that command will actually encrypt your text in Vim. Try it!)
Wouldn't it be nice if you had a simpler way, though?
Turns out you can add your "encrypt" command to your GUI. The "menu" commands work just like the "map" family, but they create a GUI menu entry instead. Change your vimrc to something like this:
" Encrypt my stuff
map <leader>e ggg?G<CR>
menu Project.Encrypt ggg?G<CR>
Now if you reload your vimrc you'll find a new GUI menu created, from which you can easily encrypt your text. Decrypting is left as an exercise to the reader.
Extra tip: Want to try to learn the actual shortcut, like a real vim'er? Then try this:
menu Project.Encrypt<TAB>ggg?G ggg?G<CR>
Everything after the TAB will be right-aligned: you can use that space to annotate the key-combo you should use next time.
As usual, for more info check :help menu
Is this valid C++?
const int x = 42;
void f() {
    int x[x];
    x[24] = 0;
}
Unfortunately, it is. According to 3.3.2 in the standard, a name is visible in a nested scope up to the point where it's shadowed. That means the "x" inside the brackets still refers to the global const int, and is used as the size of a new array also named "x". Any reference to "x" after that point refers to the new declaration.
Fun stuff, right?
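If you ever need the global after the shadowing point, explicit qualification still reaches it. A small sketch (the extra variable is just for illustration):
const int x = 42;

void f() {
    int x[x];               // the array bound is still the global x (42)
    x[24] = 0;              // from here on, an unqualified x means the local array
    int from_global = ::x;  // the global is still reachable via the scope resolution operator
    (void)from_global;
}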
One way would be to set up a watch expression. If you can't set up a watch expression, say, because you're using an iterator and it would be hard to express one, you can also tell gdb to set a breakpoint and then ignore it N times.
Let's see how this works with this example:
#include <vector>

int main() {
    std::vector<int> v = {1,2,3,4,5,0,7,8,9};
    int x = 42;

    for (auto i = v.begin(); i != v.end(); ++i) {
        x = x / *i;
    }
    return 0;
}
After compiling we run it and see it crash; let's start it in gdb, then set a breakpoint on the line where it crashes.
(gdb) break foo.cpp:8
Breakpoint 1 at 0x4007bc: file foo.cpp, line 8.
(gdb) info breakpoints
Num Type Disp Enb Address What
1 breakpoint keep y 0x00000000004007bc in main() at foo.cpp:8
Typing "info breakpoints" will tell you the breakpoint number; then we can tell gdb to ignore this breakpoint forever (where forever is a very large number, so the program will run until it crashes):
(gdb) ignore 1 99999
Will ignore next 99999 crossings of breakpoint 1.
(gdb) run
Starting program: /home/nico/src/a.out
Program received signal SIGFPE, Arithmetic exception.
0x00000000004007d5 in main () at foo.cpp:8
8 x = x / *i;
(gdb) info breakpoints
Num Type Disp Enb Address What
1 breakpoint keep y 0x00000000004007bc in main() at foo.cpp:8
breakpoint already hit 6 times
ignore next 99993 hits
(gdb)
By doing this we now know the program crashes the sixth time it goes through that breakpoint. Now we can actually debug the problem:
(gdb) ignore 1 5
Will ignore next 5 crossings of breakpoint 1.
(gdb) run
Starting program: /home/nico/src/a.out
Breakpoint 1, main () at foo.cpp:8
8 x = x / *i;
(gdb) p *i
$1 = (int &) @0x603024: 0
This time gdb will break exactly on the spot we want.
Take a look at this code: what does it do?
#include <iostream>

struct X {
    X() { std::cout << "X"; }
    ~X() { std::cout << "~X"; }
};

void foo() {
    X();
}
It's not hard to see that this code will print "X" and then "~X" immediately after it: X() creates a temporary object which gets constructed and then destroyed at the end of the statement. Any side effects this object has must happen in the constructor or the destructor.
Now that we know a bit more about the lifetime of temp objects, is this valid C++?
struct X {
    int y;
    X(int y) : y(y) {}
};

int foo() {
    const X &ref = X(42);
    return ref.y;
}
It looks a bit strange: ref is a reference to a temporary object. Temporary objects get destroyed at the end of the expression that creates them, so ref.y should be an invalid access. Right? Not quite: the C++ standard makes a special consideration for binding a temporary to a const reference. According to 12.2.3 this is a valid read, as long as ref is a "const X&". Even more interesting, the lifetime of the temporary "X(42)" gets extended to match the lifetime of ref: only when the reference goes out of scope will X's destructor run!
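A quick way to watch the extended lifetime in action (Noisy is a made-up struct that just prints from its constructor and destructor):
#include <iostream>

struct Noisy {
    Noisy()  { std::cout << "ctor\n"; }
    ~Noisy() { std::cout << "dtor\n"; }
};

int main() {
    const Noisy &ref = Noisy(); // the temporary's lifetime is extended to match ref's
    std::cout << "still alive\n";
    (void)ref;
}                               // prints "ctor", "still alive", "dtor": the temporary dies here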
Let's analyze this seemingly simple code sample:
struct X {
    X();
};

void foo() {
    static X a;
}

X b;

void bar() {
    foo();
    X c;
}
Do you know what the order of initialization will be for a, b and c? b is rather easy: it's a plain global variable and it gets initialized first of all, even before main runs. c is also easy: it gets constructed every time execution reaches the line where it is defined. How about a?
a is static, so just like b it should be initialized only once. Unlike b, though, it belongs to foo's scope, and it will only be initialized the first time foo is executed. Let's see how that happens in gcc by taking an even simpler example:
struct X {
    X() throw();
};

void foo() throw() {
    static X x;
}
Note: the throw()'s are there only to tell the compiler we don't want any exception handling code, which makes the assembly a bit easier to inspect. Let's compile, disassemble and c++filt this. You should see something very interesting in the first few lines:
.file "foo.cpp"
.local guard variable for foo()::x
.comm guard variable for foo()::x,8,8
.text
# Skipping the actual foo definition, we'll see that later
.LFE0:
.size foo(), .-foo()
.local foo()::x
.comm foo()::x,1,1
Alongside the definition of foo, gcc reserved some space for our static variable; interestingly, it also reserved 8 bytes for something called "guard variable for foo()::x" (that's the demangled name, of course). This means there is a flag that records whether foo()::x has already been initialized or not.
Let's now analyze the assembly for foo() to understand how the guard is used:
foo():
movl guard variable for foo()::x, %eax
movzbl (%rax), %eax
testb %al, %al
jne .L1
movl guard variable for foo()::x, %edi
call __cxa_guard_acquire
testl %eax, %eax
setne %al
testb %al, %al
je .L1
movl foo()::x, %edi
call X::X()
movl guard variable for foo()::x, %edi
call __cxa_guard_release
.L1:
# Rest of the method (empty, in our example)
This is also interesting: initializing a static local variable depends on the C++ runtime library (which is tied to the compiler's ABI; __cxa_guard_acquire and __cxa_guard_release come from the Itanium C++ ABI that gcc follows). We could translate the whole thing to, more or less, the following pseudocode:
void foo() {
    static X x;
    static guard x_is_initialized;   // the 8 bytes reserved above
    if (!x_is_initialized) {         // fast path: skip everything once x is constructed
        if (__cxa_guard_acquire(&x_is_initialized)) {
            X::X();                  // run the constructor
            __cxa_guard_release(&x_is_initialized);  // marks the guard as initialized
        }
    }
}
(Note: exception safety is ignored here, which of course is not the case for a real C++ runtime: if the constructor throws, __cxa_guard_abort is called instead of __cxa_guard_release.)
Eventually, __cxa_guard_acquire will check if this object was already initialized or if anyone else is trying to initialize this object, and then it will signal the calling method to run x's constructor if it's safe to do so.
There's another bit of information in here which is not immediately obvious: if X's constructor fails (i.e. an exception is thrown from it), the guard is never marked as initialized. Assuming the exception is caught somewhere else, the next time foo() is called the initialization of foo()::x will be attempted again.
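A small sketch of that retry behavior (the names and the deliberately failing constructor are made up for illustration):
#include <cstdio>
#include <stdexcept>

struct X {
    X() {
        static int attempts = 0;
        if (++attempts == 1)
            throw std::runtime_error("first construction fails");
        std::puts("X constructed");
    }
};

void foo() {
    static X x;   // guarded initialization
}

int main() {
    try { foo(); } catch (const std::exception&) { std::puts("initialization failed, will retry"); }
    foo();        // the guard was never released, so the constructor runs again and succeeds
}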
Add it to your bundles in vim and, for extra magic, just map some key to :AT in your vimrc. I have added this one:
map <F4> :AT<CR>
I don't know how I lived without this for such a long time.
Now that we have a basic gateway we can do crazy stuff, like installing a proxy. You may want to manually configure a proxy on each client, but you can also choose to install a transparent proxy for all your users. This can be done with squid; let's see how.
Start by installing squid on your gateway. You can choose a different machine, but you'll have to do some magic with iptables. It's easier to just use the same machine.
Once squid is installed, head to /etc/squid/ and open squid.conf in vim. Yes, it's very scary to see such a long config file, but it's mostly just comments. Luckily squid has reasonable defaults, so you can ignore most of this file. To test whether your squid installation was successful, before changing anything, you can tail -f /var/log/squid/access.log and set your browser's proxy to your gateway's IP, port 3128 (squid's default port). If everything works you should be able to browse and see the access log scrolling by.
If you are getting a 'denied' page on every request, you may have to configure squid to allow http access: search for the 'http_access deny all' line and comment it out. You may also have to search for the local network definitions and set them up correctly (something like 'acl localnet src 192.168.0.0/24').
Once you have verified that your proxy is working, you can configure it to run in transparent mode. Search for the http_port directive and change it to something like 'http_port 8213 transparent' (notice I changed the default port). It is also good practice to specify both IP and port, so squid binds only to the internal interface (you are probably not interested in serving as a proxy for the outside world, unless you plan to run a reverse proxy).
Changing squid to run in transparent mode is not enough, though. You will also need to tell your router to redirect every packet destined to port 80 to squid instead. Assuming your LAN is on the 192.168.10.0/24 network and squid is listening on port 1234, you can use this magic command to set up your iptables rule:
iptables -t nat -A PREROUTING -s 192.168.10.0/24 -p tcp --dport 80 -j DNAT --to :1234
If this doesn't work for you, or you want a more detailed explanation, you can check my post about this iptables rule.
Once everything is ready, you should be able to remove the proxy setting from your browser and start using squid right away, with no client-side configuration needed. tail -f /var/log/squid/access.log for hours of (thought-policing) fun.
Now that you have a gateway and a transparent proxy, it's time to install a content filter too. It's not hard: just go to your squid config file and search for the acl section. There, add the following two lines:
acl blocksites url_regex "/home/router/blocked_sites.acl"
http_access deny blocksites
This will include the blocked_sites.acl file and deny access to every URL that matches a pattern in it. There are many blacklist services out there from which you can download a filter to suit your needs.
Of course, you probably don't want to restart squid each time a new site is added to your blocklist. For this you can use "squid -k reconfigure" to make squid reload its configuration.
To decipher weird C declarations go to http://cdecl.org/ and type in your type. It works for most cases... good luck trying to figure out templates, though; for template metaprogramming you are on your own.
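For example, here's a classic head-scratcher; the comment is the cdecl-style reading of the declaration:
// x: array of 3 pointers to functions returning a pointer to an array of 5 chars
char (*(*x[3])())[5];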
In any LAN you'll probably want to expose some services to the outside world, be it for a bittorrent connection or because you have internal servers you need to access from outside your LAN. To do this, you'll have to tell your router to forward some external port to an internal one, like this:
iptables -t nat -A PREROUTING -i eth0 -p tcp \
    --dport PORT -j DNAT --to INTERNAL_IP:INTERNAL_PORT
# This rule may not be needed, depending on other chain configs
iptables -A FORWARD -i eth0 -p tcp -m state --state NEW \
    -d INTERNAL_IP --dport INTERNAL_PORT -j ACCEPT
This is enough to expose a private server to the world, but it won't be very useful if the internal machine's address keeps changing (e.g. it gets a different DHCP lease every now and then), so you'll want INTERNAL_IP to be a static IP.
Of course, these commands are little short of black magic. iptables is rather complex and quite difficult to master, but as a short description we can say it's a way of applying a set of rules to network packets. In iptables you have different tables of rules (in this case we use the -t[able] nat) and specify that we want our rule applied in the PREROUTING phase. -i says the rule applies only to packets coming in on eth0, and --dport says it applies only to packets destined to a certain port. Of course, if you are going to specify a port then you also need to specify the protocol (in this case, tcp).
Now we have replicated in our setup almost all the functionality a small COTS router provides. Next time we'll see how to improve on that by adding a proxy.
Luckily you can easily restore your state if you just write all the gdb commands you need into a file, then start gdb with "--command=state.gdb". Magic! All your breakpoints are there.
Alternatively, an even better solution: just don't exit gdb after recompiling. Simply "kill" the process currently being debugged (i.e. type "kill" inside gdb, do not kill gdb itself!) and gdb will be smart enough to reload your binary if it changed.