Discussion:
BPC 4 very slow
Gandalf Corvotempesta
2016-01-10 13:12:21 UTC
Permalink
I'm trying to use v4 to back up a couple of test servers.
Server 1 has 150GB of data to back up.

Plain rsync copies everything in about 14-15 hours.
Bacula copies everything in 15 hours and 46 minutes (based on the last backup email).
BPC has now been running for 48 hours. The whole copy took 25
hours, and it has been running "fsck #1" since yesterday.

Something strange is going on: in the host summary page I can see 1 full
backup (#0), filled=yes, level=0, which lasted 25 hours (about 1540.2
minutes), then a "partial" backup (#1), filled=yes, level=1,
duration=137.

Is the partial backup a "good one"? Shouldn't it be an incremental?
Why are both "filled"? If I understood properly, only the last one should be
filled.

I've uploaded two images (some info was removed for posting). In
the server summary, I'm referring to the first item (the one with fsck
running); the second one is a new server with its first full backup
running right now.

[two screenshot links omitted]

Is everything OK and working as expected? Because having backups
running for more than 38 hours is not normal:
http://pastebin.com/raw/nXuUF9sA

The srv1 dump has been running since 16/01/09 @ 0:26

------------------------------------------------------------------------------
Site24x7 APM Insight: Get Deep Visibility into Application Performance
APM + Mobile APM + RUM: Monitor 3 App instances at just $35/Month
Monitor end-to-end web transactions and take corrective actions now
Troubleshoot faster and improve end-user experience. Signup Now!
http://pubads.g.doubleclick.net/gampad/clk?id=267308311&iu=/4140
_______________________________________________
BackupPC-users mailing list
BackupPC-***@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
Gandalf Corvotempesta
2016-01-10 21:12:53 UTC
Permalink
2016-01-10 14:12 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Is everything OK and working as expected? Because having backups
http://pastebin.com/raw/nXuUF9sA
Now I can see this in the log file:

2016-01-08 19:26:48 Created directory /var/backups/backuppc/pc/x/refCnt
2016-01-08 19:26:48 full backup started for directory full
2016-01-09 21:07:03 full backup 0 complete, 4181687 files, 4181687
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)
2016-01-10 00:21:40 Aborting backup up after signal INT
2016-01-10 08:10:48 incr backup started for directory full
2016-01-10 10:27:51 Got fatal error during xfer (rsync_bpc exited with
benign status 24 (6144))
2016-01-10 19:23:47 full backup started for directory full

Why was a new full started again?
In the config I have:

$Conf{FullPeriod} = 27.97;
$Conf{FullKeepCnt} = 1;
$Conf{FullKeepCntMin} = 1;
$Conf{FullAgeMax} = 60;
$Conf{IncrPeriod} = 0.97;
$Conf{IncrKeepCntMin} = 7;
$Conf{IncrAgeMax} = 35;
$Conf{IncrKeepCnt} = 31;
$Conf{FillCycle} = 0;

$Conf{WakeupSchedule} = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
14, 15, 16, 17, 18, 19, 20, 21, 22, 23];

$Conf{BlackoutPeriods} = [
{
hourBegin => 7.0,
hourEnd => 23.5,
weekDays => [1, 2, 3, 4, 5, 6, 7],
},
];

It should create 1 full every 27.97 days and never back up during
the day (blackout from 07:00 to 23:30 every day).

Why was a new full started at 19:23:47? It's wrong twice over: once for
running during a blackout period and once for running a second full
regardless of the configured period.

Alexander Moisseev
2016-01-11 05:57:22 UTC
Permalink
Post by Gandalf Corvotempesta
$Conf{BlackoutPeriods} = [
{
hourBegin => 7.0,
hourEnd => 23.5,
weekDays => [1, 2, 3, 4, 5, 6, 7],
},
];
It should create 1 full every 27.97 days and never back up during
the day (blackout from 07:00 to 23:30 every day).
Why was a new full started at 19:23:47? It's wrong twice over: once for
running during a blackout period and once for running a second full
regardless of the configured period.
I believe weekDays should be in the range 0...6 (0 is Sunday and 6 is Saturday), i.e.
weekDays => [0, 1, 2, 3, 4, 5, 6],
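For reference, the whole corrected period with that 0-6 numbering would read (same hours as in the quoted config, only weekDays changed):

```perl
$Conf{BlackoutPeriods} = [
    {
        hourBegin => 7.0,
        hourEnd   => 23.5,
        weekDays  => [0, 1, 2, 3, 4, 5, 6],   # 0 = Sunday ... 6 = Saturday
    },
];
```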


Gandalf Corvotempesta
2016-01-11 08:27:28 UTC
Permalink
Post by Alexander Moisseev
I believe weekDays should be in range 0...6 (0 is Sunday and 6 is Saturday), i.e.
weekDays => [0, 1, 2, 3, 4, 5, 6],
You are right, my mistake.
But what about the rest of my questions?

Sorin Srbu
2016-01-11 08:33:49 UTC
Permalink
Post by Alexander Moisseev
I believe weekDays should be in range 0...6 (0 is Sunday and 6 is Saturday), i.e.
weekDays => [0, 1, 2, 3, 4, 5, 6],
You are right, my mistake.
But what about the rest of my questions?
Please excuse a side-track.

Aren't there usually quite a few questions about how to set up the
blackout periods and related settings?
There is a lot of confusion about how to set this up.

May I suggest an overhaul of the GUI, and maybe of the docs description, for
this feature to simplify things?


FWIW, in Sweden Saturdays and Sundays are considered weekend days.
Weekdays (= the workweek) are Mondays through Fridays.
--
//Sorin
Alexander Moisseev
2016-01-11 11:48:02 UTC
Permalink
Post by Sorin Srbu
Aren't there usually quite a few questions about how to set up the
blackout periods and related settings?
There is a lot of confusion about how to set this up.
May I suggest an overhaul of the GUI, and maybe of the docs description, for
this feature to simplify things?
The weekDays numbering is documented here: http://backuppc.sourceforge.net/BackupPC-4.0.0alpha3_doc.html#What-to-backup-and-when-to-do-it

Your suggestion is reasonable, but changes to the code or documentation aren't possible at the moment. You can find details in the latest BackupPC-devel mailing list threads.

Sorin Srbu
2016-01-11 12:12:05 UTC
Permalink
Post by Sorin Srbu
Aren't there usually quite a few questions about how to set up the
blackout periods and related settings?
There is a lot of confusion about how to set this up.
May I suggest an overhaul of the GUI, and maybe of the docs description, for
this feature to simplify things?
http://backuppc.sourceforge.net/BackupPC-4.0.0alpha3_doc.html#What-to-backup-and-when-to-do-it
I know about it. I remember this being a pretty darn high threshold to pass
when I first started with BPC, and I have a feeling I might've irritated the
list pretty thoroughly with my questions at the time.
Your suggestion is reasonable, but changes to the code or documentation
aren't possible at the moment. You can find details in the latest
BackupPC-devel mailing list threads.
Ah, thanks.

Another solution occurred to me: how about a companion app in the form of a
web GUI on the docs site, with a helper to set up the blackout periods? You
know, a simple thing with checkboxes ("I want the backup to occur at these
times", etc.); click "prepare code" and you get a piece of code you can copy
and paste directly into the config file.

It might be a bit over the top though...
--
//Sorin
Alexander Moisseev
2016-01-11 11:37:20 UTC
Permalink
Post by Gandalf Corvotempesta
But what about the rest of my questions?
Is the partial backup a "good one"? Shouldn't it be an incremental?
Why are both "filled"? If I understood properly, only the last one should be
filled.
2016-01-10 08:10:48 incr backup started for directory full
2016-01-10 10:27:51 Got fatal error during xfer (rsync_bpc exited with
benign status 24 (6144))

The incremental backup was interrupted due to a fatal error, so it is "partial". That means only some files were backed up. http://backuppc.sourceforge.net/BackupPC-4.0.0alpha3_doc.html#Backup-basics

Gandalf Corvotempesta
2016-01-11 13:16:34 UTC
Permalink
Post by Alexander Moisseev
The incremental backup was interrupted due to fatal error. So, it is "partial". It means only some files were backed up. http://backuppc.sourceforge.net/BackupPC-4.0.0alpha3_doc.html#Backup-basics
Status 24 should not be considered an error (or at least, a configuration
variable should be added to enable or disable error 24 detection).
On many servers, vanished files are normal (for example, PHP session
files and so on).
Having a "failed job" for an error 24 is not good, because the job was
executed properly; some files just weren't copied, as is expected with
volatile files.
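As a sketch of one possible workaround (this is not a built-in BackupPC option; the wrapper name is made up), the transfer command could be wrapped so that exit status 24 is mapped to success before BackupPC sees it:

```shell
# Hypothetical wrapper: run any command, but treat exit status 24
# (rsync's "some files vanished before they could be transferred")
# as success. Intended use: run_tolerant rsync -a src/ dst/
run_tolerant() {
    "$@"
    status=$?
    if [ "$status" -eq 24 ]; then
        # Vanished files are benign on servers with volatile files
        # (PHP sessions, rotating logs, ...), so report success.
        return 0
    fi
    # Any other non-zero status is a real error; pass it through.
    return "$status"
}
```

Whether this is safe depends on your data: status 24 genuinely means some files were skipped, so a backup run through such a wrapper is "complete except for volatile files", not bit-for-bit complete.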

By the way, I added a second server yesterday, and its first full
backup is still running, as per the image:
[screenshot link omitted]

First line: I don't know what it is trying to do. The first full
completed with about 1,400,000 files. The second backup was the same. This
is the third backup, so why is it trying to parse 3,358,721 files?
Second line: this server was added yesterday and seems to be stuck in
its first full backup. No files have been parsed for many hours (the rsync log
on the client hasn't logged anything since this morning). strace on the xfer pid
shows some files being parsed.

Yesterday I had 2 backups for the server in the first line: 1 full, 1
incremental. Now I have just 1 "active" backup. No more full or
incremental.


Something is not working properly and, after all, 24/48 hours for each
backup is absolute nonsense. The same server with Bacula completed
in 16 hours.

How can I troubleshoot this?

Gandalf Corvotempesta
2016-01-11 17:05:54 UTC
Permalink
2016-01-11 14:16 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Something is not working properly and, after all, 24/48 hours for each
backup is absolute nonsense. The same server with Bacula completed
in 16 hours.
For example, the rsync transfer starts, then stops for a couple of
seconds, then starts again, as if transferring a batch of 20-30 files
at a time:

2016/01/11 18:02:58 [16560] 2016/01/11 18:02:58: host unknown
(172.17.0.1) send x/img/tmp/tab_mini_AdminStock_1.gif (622 bytes).
Total 43 bytes.
2016/01/11 18:02:58 [16560] 2016/01/11 18:02:58: host unknown
(172.17.0.1) send x/img/tmp/tab_mini_AdminTools_1.gif (351 bytes).
Total 43 bytes.
2016/01/11 18:03:15 [16560] 2016/01/11 18:03:15: host unknown
(172.17.0.1) send x/js/.htaccess (275 bytes). Total 43 bytes.
2016/01/11 18:03:15 [16560] 2016/01/11 18:03:15: host unknown
(172.17.0.1) send x/js/admin-categories-tree.js (9362 bytes). Total 95
bytes.

17 seconds (18:02:58 => 18:03:15) without doing anything. This
happens every 10-15 seconds and is a waste of time.
Plain rsync doesn't have this issue. What is happening in these 17
seconds on the server side?

Gandalf Corvotempesta
2016-01-11 17:07:35 UTC
Permalink
2016-01-11 18:05 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
17 seconds (18:02:58 => 18:03:15) without doing anything. This
happens every 10-15 seconds and is a waste of time.
Plain rsync doesn't have this issue. What is happening in these 17
seconds on the server side?
This is even worse:

2016/01/11 18:02:01 [18031] 2016/01/11 18:02:01: host unknown
(172.17.0.1) send x/y/available_version.txt (0 bytes). Total 39 bytes.
2016/01/11 18:02:01 [18031] 2016/01/11 18:02:01: host unknown
(172.17.0.1) send x/z/available_version.txt (0 bytes). Total 39 bytes.
2016/01/11 18:03:46 [18031] 2016/01/11 18:03:46: host unknown
(172.17.0.1) send x/var/log/php-fpm.log (71637089 bytes). Total 858930
bytes.
2016/01/11 18:03:46 [18031] 2016/01/11 18:03:46: host unknown
(172.17.0.1) send x/clamav/mirrors.dat (208 bytes). Total 43 bytes.

18:02:01 => 18:03:46 without transferring anything.

Les Mikesell
2016-01-11 18:10:33 UTC
Permalink
On Mon, Jan 11, 2016 at 11:07 AM, Gandalf Corvotempesta
Post by Gandalf Corvotempesta
2016-01-11 18:05 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
17 seconds (18:02:58 => 18:03:15) without doing anything. This
happens every 10-15 seconds and is a waste of time.
Plain rsync doesn't have this issue. What is happening in these 17
seconds on the server side?
2016/01/11 18:02:01 [18031] 2016/01/11 18:02:01: host unknown
(172.17.0.1) send x/y/available_version.txt (0 bytes). Total 39 bytes.
Wild guess here, but 'host unknown' usually means something has done a
DNS lookup (or reverse, number to name) that has failed. DNS lookups
can be slow. Maybe sticking the client hosts and IPs in your
/etc/hosts file would help if your reverse DNS doesn't work.
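As a concrete sketch of that suggestion (the host name here is made up), a static entry on the BackupPC server makes the reverse lookup local and instant:

```
# /etc/hosts on the BackupPC server
172.17.0.1    client1.example.local  client1
```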
--
Les Mikesell
***@gmail.com

Gandalf Corvotempesta
2016-01-11 19:31:58 UTC
Permalink
Post by Les Mikesell
Wild guess here, but 'host unknown' usually means something has done a
DNS lookup (or reverse, number to name) that has failed. DNS lookups
can be slow. Maybe sticking the client hosts and IPs in your
/etc/hosts file would help if your reverse DNS doesn't work.
I don't have reverse DNS set up for local machines and, as I wrote before, the same
configuration is used with plain rsync without any issue at all.

If DNS resolution were slow (but that is not the case; I don't see any
delay), the same delay should also happen with standard rsync.

Les Mikesell
2016-01-11 20:32:22 UTC
Permalink
On Mon, Jan 11, 2016 at 1:31 PM, Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Post by Les Mikesell
Wild guess here, but 'host unknown' usually means something has done a
DNS lookup (or reverse, number to name) that has failed. DNS lookups
can be slow. Maybe sticking the client hosts and IPs in your
/etc/hosts file would help if your reverse DNS doesn't work.
I don't have reverse DNS set up for local machines and, as I wrote before, the same
configuration is used with plain rsync without any issue at all.
If DNS resolution were slow (but that is not the case; I don't see any
delay), the same delay should also happen with standard rsync.
I don't recognize that 'unknown host' log entry. If the transfers
wait for whatever is writing it, it might cause a delay where the time
will depend on the DNS response - if you get an immediate NXDOMAIN
from a local server it should be quick but if you are referred to an
upstream and possibly firewalled server that won't respond you would
have a multiple-second wait for a timeout.

You could test the theory with "nslookup 172.17.0.1" to see how long
the response takes if that is the actual address.
--
Les Mikesell
***@gmail.com

Gandalf Corvotempesta
2016-01-11 20:56:17 UTC
Permalink
Post by Les Mikesell
I don't recognize that 'unknown host' log entry. If the transfers
wait for whatever is writing it, it might cause a delay where the time
will depend on the DNS response - if you get an immediate NXDOMAIN
from a local server it should be quick but if you are referred to an
upstream and possibly firewalled server that won't respond you would
have a multiple-second wait for a timeout.
DNS resolution is done just on connect, not for each file.
So the delay would affect only the first connection attempt; after that,
speed is normal, as no DNS resolution is made.

And, as I wrote multiple times, the same server with standard rsync works
perfectly. Only BPC is having trouble.
Post by Les Mikesell
You could test the theory with "nslookup 172.17.0.1" to see how long
the response takes if that is the actual address.
$ time nslookup 172.17.0.1
Server: 127.0.1.1
Address: 127.0.1.1#53

** server can't find 1.0.17.172.in-addr.arpa: NXDOMAIN


real 0m0.108s
user 0m0.004s
sys 0m0.004s

Is that fast enough for you?

DNS is not an issue here.

Adam Goryachev
2016-01-11 22:29:17 UTC
Permalink
On 12/01/16 07:56, Gandalf Corvotempesta wrote:

Hi Gandalf,

Just jumping in here to hopefully provide some guidance. I understand
that it can be extremely frustrating when you are migrating from a
product you have used for years and has worked well enough to some new
product and it isn't working as well as you had expected. The important
points there are that your existing product has probably been configured
and tuned appropriately to work well in your environment, while the new
product hasn't had that opportunity yet, in addition, you know the older
product well, while the new one is a big black box where it is frustrating
to work out what is wrong, why, and how to fix it. You sound very
experienced and proficient (based on the kinds of debugging you are doing),
so I'm not belittling you or your knowledge at all, but equally, we are
not your typical level 1 tech support droids either; let's work together.

So, please try to do the following:
1) Define and control your environment
* Define the specs of both your server and client (disks, ram, cpu,
network, raid level, lvm, filesystem, etc)
* Only add one server at a time; when it is working well, add a second.
Eventually you will be comfortable adding a batch of servers, but start
controlled.

2) When you come across a problem, try to provide as much detailed
information as possible. This can include unmodified log files
(excluding passwords), or similar. Don't just provide a tiny snippet or
a summary, provide all the detail (eg, not just one or two lines with
the error message, but the 50 lines before and 50 lines after).

3) Focus on one problem at a time
Often solving the first problem will also solve the other 5 unrelated
issues you thought you had. BackupPC is a complex system, and it can
take some time and effort to get it working smoothly.

4) IMHO, get yourself a development/testing/spare server to
install/setup BackupPC. Leave your existing Bacula server in-place and
working (so that you have actual working backups) and take some time to
get BackupPC working.

See more comments below....
Post by Gandalf Corvotempesta
Post by Les Mikesell
I don't recognize that 'unknown host' log entry. If the transfers
wait for whatever is writing it, it might cause a delay where the time
will depend on the DNS response - if you get an immediate NXDOMAIN
from a local server it should be quick but if you are referred to an
upstream and possibly firewalled server that won't respond you would
have a multiple-second wait for a timeout.
DNS resolution is done just on connect, not for each file.
So the delay would affect only the first connection attempt; after that,
speed is normal, as no DNS resolution is made.
This might be true, but backuppc also needs to translate every log
message it sees from every tool. If it doesn't understand that a
specific message can be safely ignored, then it will treat it as an
error and mark the backup as failed. I don't think a reverse DNS lookup
would cause this message to be logged, at least not on the backuppc side
of things. Where did you collect this log from? How many times is the
error logged? What else does that log include or say?
Post by Gandalf Corvotempesta
And, as I wrote multiple times, the same server with standard rsync works
perfectly. Only BPC is having trouble.
Sure, comparing standard rsync and BackupPC using rsync might be useful
up to a certain point, but remember they do very different things,
therefore performance will be different. If you are having a performance
issue, then you will need to work out which component is causing that,
and then decide what, if anything, you can do to try to resolve it.
Performance issues are normally one of these:
1) Disk (not raw throughput, but random I/O). Use iostat while a backup
is running to see what is happening.
2) RAM (a lot of ram can be used as a disk cache, not enough ram will
massively increase your disk I/O even without swapping)
3) CPU (for compression, if it is enabled, and also rsync does lots of
calculations/etc)
4) Bandwidth
5) Latency (both disk and network)

I've tried to order the above list by the most common BPC performance
problems. Run through the list, checking both the BPC server and the client
for each item. When you find the problem, you can discuss it here; there
could be some config option that will help, or it might require
installing more/different hardware (e.g. more RAM).
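As a rough Linux-only sketch, points 1-3 above can be spot-checked with nothing but /proc (iostat -x from the sysstat package gives the same per-disk data with nicer formatting):

```shell
# 2) RAM: total, free, and page-cache figures -- little free+cached
#    memory means the pool disks are taking the I/O directly
grep -E '^(MemTotal|MemFree|Cached)' /proc/meminfo

# 3) CPU: 1/5/15-minute load averages and running/total process counts
cat /proc/loadavg

# 1) Disk: completed reads ($4) and writes ($8) per block device since
#    boot; sample twice during a backup and diff to get the I/O rate
awk '$3 ~ /^(sd|vd|xvd|nvme)/ {print $3, "reads:", $4, "writes:", $8}' /proc/diskstats
```

Run the snapshot once while a backup is active and once while idle; a large jump in read/write counts with low throughput points at random-I/O saturation (point 1) rather than bandwidth.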

PS, can you explain the reason you are looking to move away from Bacula?
What issue is it that you are trying to solve?

Regards,
Adam
--
Adam Goryachev Website Managers www.websitemanagers.com.au

Gandalf Corvotempesta
2016-01-11 23:00:02 UTC
Permalink
Post by Adam Goryachev
1) Define and control your environment
* Define the specs of both your server and client (disks, ram, cpu,
network, raid level, lvm, filesystem, etc)
* Only add one server at a time, when it is working well, add a second,
eventually you will be comfortable with adding a batch of servers, but
start controlled
server: Dell PE2950 with 2GB of RAM and 1 quad-core CPU, 6 SATA disks in RAID-5
client: Xen DomU, 8GB RAM, SAS disks
network: full gigabit (1 switch between server and client)

iperf shows nearly 1Gbit

Initially I added just 1 server, the biggest one, as a test.
After the first successful backup (which took 2 days) I added a smaller
server, to check whether a smaller server would be processed quickly.

For no apparent reason, BPC started a new backup for the first
server, as an incremental.
Then the first full and first incremental suddenly disappeared, and
now another full is running.

At the same time, the first full for the second server is still running.

Currently I have:
server1: type=backup, status = "copy #2 -> #1", count = 353170, start time = 1/11 22:42
server2: type=full, status = "backup full", count = 1224681, start time = 1/10 12:16

For server1 (Backup Summary table):

#1, Type=full, Filled=no, Level=0, Start Date=1/10 19:23,
Duration/mins=1372.2, Age/days=1.2, Server Backup
Path=/var/backups/backuppc/pc/x/1
#2, Type=full, Filled=yes, Level=0, Start Date=1/10 19:23,
Duration/mins=1372.2, Age/days=1.2, Server Backup
Path=/var/backups/backuppc/pc/x/2
Post by Adam Goryachev
2) When you come across a problem, try to provide as much detailed
information as possible. This can include unmodified log files
(excluding passwords), or similar. Don't just provide a tiny snippet or
a summary, provide all the detail (eg, not just one or two lines with
the error message, but the 50 lines before and 50 lines after).
I've never posted modified logs; I've only removed the server names.
Post by Adam Goryachev
3) Focus on one problem at a time
Often solving the first problem will also solve the other 5 unrelated
issues you thought you had. BackupPC is a complex system, and it can
take some time and effort to get it working smoothly.
I have just 1 problem; the others are related to this one (I think).
Post by Adam Goryachev
4) IMHO, get yourself a development/testing/spare server to
install/setup BackupPC. Leave your existing Bacula server in-place and
working (so that you have actual working backups) and take some time to
get BackupPC working.
Bacula is still running. Having a test environment is impossible at this time.
Post by Adam Goryachev
This might be true, but backuppc also needs to translate every log
message it sees from every tool. If it doesn't understand that a
specific message can be safely ignored, then it will treat it as an
error and mark the backup as failed. I don't think a reverse DNS lookup
would cause this message to be logged, at least not on the backuppc side
of things. Where did you collect this log from? How many times is the
error logged? What else does that log include or say?
The posted log was from the client, not from BPC. It was the rsync log.
Post by Adam Goryachev
1) Disk (not raw throughput, but random I/O). Use iostat while a backup
is running to see what is happening.
OK, I'll try tomorrow.
Post by Adam Goryachev
2) RAM (a lot of ram can be used as a disk cache, not enough ram will
massively increase your disk I/O even without swapping)
I would like to add RAM, but backups are still running. I'd like to see
whether these running processes terminate in the near future before
rebooting.
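In the meantime, how much of the existing RAM is acting as disk cache can be read straight from /proc/meminfo, no reboot needed:

```shell
# MemFree alone is misleading: memory counted under "Cached" is the page
# cache, which is what speeds up BackupPC's metadata-heavy disk access.
awk '/^(MemTotal|MemFree|Cached|SwapTotal|SwapFree):/' /proc/meminfo
```

If Cached is small and swap is in use, the box is memory-starved for this workload.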
Post by Adam Goryachev
3) CPU (for compression, if it is enabled, and also rsync does lots of
calculations/etc)
Compression level 3 (the same used in Bacula)
Post by Adam Goryachev
4) Bandwidth
full gigabit, tested many times.
Post by Adam Goryachev
5) Latency (both disk and network)
Network latency is good; disk latency I've never checked.
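A crude way to put a number on disk write latency (this assumes GNU dd for the oflag=dsync flag; ioping, if installed, gives a cleaner per-request figure):

```shell
# Time 100 small synchronous 4k writes: each one forces the data to disk,
# so the elapsed time reflects per-write latency. On RAID-5 this is where
# the small-write penalty shows up.
dd if=/dev/zero of=/tmp/latency.test bs=4k count=100 oflag=dsync 2>&1 | tail -n 1
rm -f /tmp/latency.test
```

Run it on the BackupPC pool filesystem while a backup is active to see contended latency rather than idle latency.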
Post by Adam Goryachev
PS, can you explain the reason you are looking to move away from Bacula?
What issue is it that you are trying to solve?
Too complicated. It often hangs for no apparent reason, then starts again
after a couple of days. Really, it's a mess of software. I hate it.

------------------------------------------------------------------------------
Site24x7 APM Insight: Get Deep Visibility into Application Performance
APM + Mobile APM + RUM: Monitor 3 App instances at just $35/Month
Monitor end-to-end web transactions and take corrective actions now
Troubleshoot faster and improve end-user experience. Signup Now!
http://pubads.g.doubleclick.net/gampad/clk?id=267308311&iu=/4140
_______________________________________________
BackupPC-users mailing list
BackupPC-***@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
Les Mikesell
2016-01-12 00:08:18 UTC
Permalink
On Mon, Jan 11, 2016 at 5:00 PM, Gandalf Corvotempesta
Post by Gandalf Corvotempesta
server: DELL PE2950 with 2GB ram and 1 quad-core CPU, 6 SATA disks in RAID-5
RAID-5 will cost a lot in performance on small writes.
Post by Gandalf Corvotempesta
Initially I added just one server, the biggest one, to test.
After the first successful backup (which lasted 2 days) I added a smaller
server to check whether a smaller server would be processed quickly.
For no apparent reason, BPC started a new backup, an incremental, for the
first server I added.
I think if your full took 2 days and daily backups are scheduled,
starting an incremental would be expected.
Post by Gandalf Corvotempesta
Then the first full and first incremental suddenly disappeared, and
now another full is running.
That seems wrong - (maybe, depending on the counts you are configured
to save). But I thought in your first posting you said you had a
'partial'. Those would be discarded when a better one completes.
And one of your log entries mentioned a fatal error.
--
Les Mikesell
***@gmail.com

Gandalf Corvotempesta
2016-01-12 00:26:38 UTC
Permalink
Post by Les Mikesell
That seems wrong - (maybe, depending on the counts you are configured
to save). But I thought in your first posting you said you had a
'partial'. Those would be discarded when a better one completes.
And one of your log entries mentioned a fatal error.
This is the BPC log for my first server (the one that is currently in a
very, very slow copying phase from #2 to #1):

2016-01-08 19:26:48 Created directory /var/backups/backuppc/pc/x/refCnt
2016-01-08 19:26:48 full backup started for directory full
2016-01-09 21:07:03 full backup 0 complete, 4181687 files, 4181687
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)
2016-01-10 00:21:40 Aborting backup up after signal INT
2016-01-10 08:10:48 incr backup started for directory full
2016-01-10 10:27:51 Got fatal error during xfer (rsync_bpc exited with
benign status 24 (6144))
2016-01-10 19:23:47 full backup started for directory full
2016-01-11 18:16:02 full backup 1 complete, 4090748 files, 4090748
bytes, 24 xferErrs (0 bad files, 0 bad shares, 24 other)

First full dump was completed at 2016-01-09 21:07:03
A new incremental was started at 2016-01-10 08:10:48
This incremental was aborted due to error 24. That should never abort a
backup; error 24 is benign and can happen in any backup when spool files
and directories are backed up (like PHP session files).

But why was a new full started at 2016-01-10 19:23:47? It should have
started a new incremental. By the way, this new full lasted 23 hours, as
you can see (the bytes counter is wrong: I don't have 4090748 bytes but
almost 150GB).

Adam Goryachev
2016-01-12 00:43:35 UTC
Permalink
Post by Gandalf Corvotempesta
Post by Les Mikesell
That seems wrong - (maybe, depending on the counts you are configured
to save). But I thought in your first posting you said you had a
'partial'. Those would be discarded when a better one completes.
And one of your log entries mentioned a fatal error.
This is the BPC log for my first server (the one that actually is in
very very very slow copying phase from #2 to #1)
2016-01-08 19:26:48 Created directory /var/backups/backuppc/pc/x/refCnt
2016-01-08 19:26:48 full backup started for directory full
2016-01-09 21:07:03 full backup 0 complete, 4181687 files, 4181687
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)
2016-01-10 00:21:40 Aborting backup up after signal INT
2016-01-10 08:10:48 incr backup started for directory full
2016-01-10 10:27:51 Got fatal error during xfer (rsync_bpc exited with
benign status 24 (6144))
2016-01-10 19:23:47 full backup started for directory full
2016-01-11 18:16:02 full backup 1 complete, 4090748 files, 4090748
bytes, 24 xferErrs (0 bad files, 0 bad shares, 24 other)
First full dump was completed at 2016-01-09 21:07:03
A new incremental was started at 2016-01-10 08:10:48
This incremental was aborted due to an error 24. This should never
abort a backup, error 24 is benign and could happen in each backup if
spool files and dir are backupped (like php session files)
But, why on 2016-01-10 19:23:47 a new full was started? It should
start a new incremental. By the way, this new full lasted for 23 hours
as you can see.... (bytes counter is wrong, I don't have 4090748 bytes
but almost 150GB)
I'm confused and concerned about the above log...

2016-01-08 19:26:48 Created directory /var/backups/backuppc/pc/x/refCnt
2016-01-08 19:26:48 full backup started for directory full
2016-01-09 21:07:03 full backup 0 complete, 4181687 files, 4181687
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)

OK, so we created a new directory for the new host, and started the
first backup (should be number 0).
The backup completed with 4181687 files (looks like a bug, because bytes
equals the file count); however, it says there was 1 xferErr, so we don't
know whether this caused BPC to mark the backup as incomplete or not.

2016-01-10 00:21:40 Aborting backup up after signal INT

This line seems to be out of context. I don't think it is related to the
previous (completed) backup, and it shouldn't be related to the next
backup that hasn't started yet. Did you do something here?

2016-01-10 08:10:48 incr backup started for directory full
2016-01-10 10:27:51 Got fatal error during xfer (rsync_bpc exited with
benign status 24 (6144))

Started an incremental backup. Given the time, I'm guessing this was
started manually; certainly the default doesn't start backups with that
sort of offset, especially if it is the only server configured. How did
you start this backup? Maybe from the web interface?
In any case, it has failed. Can you provide the actual backup log so we
can see more details about why it failed?

2016-01-10 19:23:47 full backup started for directory full
2016-01-11 18:16:02 full backup 1 complete, 4090748 files, 4090748
bytes, 24 xferErrs (0 bad files, 0 bad shares, 24 other)

A full can certainly happen after a failed incremental; we don't know
why. Again, the time looks very strange. How was this initiated? Can you
provide copies of your configs? The detailed backup logs?

Regards,
Adam
--
Adam Goryachev Website Managers www.websitemanagers.com.au

Les Mikesell
2016-01-12 00:53:53 UTC
Permalink
On Mon, Jan 11, 2016 at 6:43 PM, Adam Goryachev
Post by Gandalf Corvotempesta
2016-01-10 00:21:40 Aborting backup up after signal INT
This line seems to be out of context. I don't think it is related to the
previous (completed) backup, and it shouldn't be related to the next
backup that hasn't started yet. Did you do something here?
I was going to comment on that, but Adam is in a much better position
to help since he has run v4. Is there any chance you could have
started more than one instance of the backuppc server?
--
Les Mikesell
***@gmail.com

Gandalf Corvotempesta
2016-01-12 01:00:34 UTC
Permalink
Post by Les Mikesell
Is there any chance you could have
started more than one instance of the backuppc server?
Absolutely not. Only "rsync_bpc" always appears twice for each running
backup (in this case, 1 running backup, so 2 rsync_bpc processes):

backuppc 27940 0.0 0.2 58776 5260 ? S Jan08 1:20
/usr/bin/perl /usr/local/BackupPC/bin/BackupPC -d
backuppc 32144 0.0 0.5 59380 10960 ? S Jan11 0:00
/usr/bin/perl /usr/local/BackupPC/bin/BackupPC_dump srv1
backuppc 32147 12.4 15.8 383596 326712 ? D< Jan11 23:11
/usr/bin/perl /usr/local/BackupPC/bin/BackupPC_backupDuplicate -h srv1
backuppc 32237 0.1 0.7 60680 14672 ? S 00:46 0:05
/usr/bin/perl /usr/local/BackupPC/bin/BackupPC_dump srv2
backuppc 32241 0.8 8.6 200376 177196 ? S 00:46 0:31
/usr/local/bin/rsync_bpc --bpc-top-dir /var/backups/backuppc
--bpc-host-name srv2 --bpc-share-name full --bpc-bkup-num 0
--bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1
--bpc-bkup-inode0 1 --bpc-log-level 1 --super --recursive
--protect-args --numeric-ids --perms --owner --group -D --times
--links --hard-links --delete --partial --log-format=log: %o %i %B
%8U,%8G %9l %f%L --stats --checksum
--password-file=/var/backups/backuppc/pc/srv2/.rsyncdpw32237
--exclude=/proc --exclude=/sys --exclude=tmp/ --exclude=/var/cache
--exclude=/var/log/lastlog --exclude=/var/log/rsync*
--exclude=/var/lib/mlocate --exclude=/var/spool --exclude=/media
--exclude=/mnt ***@srv2::full /
backuppc 32242 0.5 5.5 195348 114016 ? D 00:46 0:18
/usr/local/bin/rsync_bpc --bpc-top-dir /var/backups/backuppc
--bpc-host-name srv2 --bpc-share-name full --bpc-bkup-num 0
--bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1
--bpc-bkup-inode0 1 --bpc-log-level 1 --super --recursive
--protect-args --numeric-ids --perms --owner --group -D --times
--links --hard-links --delete --partial --log-format=log: %o %i %B
%8U,%8G %9l %f%L --stats --checksum
--password-file=/var/backups/backuppc/pc/srv2/.rsyncdpw32237
--exclude=/proc --exclude=/sys --exclude=tmp/ --exclude=/var/cache
--exclude=/var/log/lastlog --exclude=/var/log/rsync*
--exclude=/var/lib/mlocate --exclude=/var/spool --exclude=/media
--exclude=/mnt ***@srv2::full /

Gandalf Corvotempesta
2016-01-12 00:57:45 UTC
Permalink
Post by Adam Goryachev
OK, so we created a new directory for the new host, and started the
first backup (should be number 0).
Yes, it was #0
Post by Adam Goryachev
The backup completed with 4181687 files (looks like a bug because bytes
equals # files) however, it says there was 1 xferErrs. So we don't know
if this caused BPC to mark the backup as incomplete or not.
Exactly. This server has more or less 150GB of files.
Post by Adam Goryachev
This line seems to be out of context. I don't think it is related to the
previous (completed) backup, and it shouldn't be related to the next
backup that hasn't started yet. Did you do something here?
Probably I pressed "Stop/dequeue" from the admin panel to skip running
new backups for a while (it was just a test).
Post by Adam Goryachev
Started a incremental backup, given the time, I'm guessing this was
started manually, certainly the default doesn't start backups with that
sort of offset, especially if it is the only server configured. How did
you start this backup? Maybe from the web interface?
Absolutely nothing. BPC started this incremental on its own.
Post by Adam Goryachev
In any case, it has failed. Can you provide the actual backup log so we
can see more details about why it failed?
How can I get it? I've posted the only log that I have in the control
panel. Do you want the error log? It's huge and full of sensitive data.
Post by Adam Goryachev
2016-01-10 19:23:47 full backup started for directory full
2016-01-11 18:16:02 full backup 1 complete, 4090748 files, 4090748
bytes, 24 xferErrs (0 bad files, 0 bad shares, 24 other)
A full can certainly happen after a failed incremental, we don't know
why. Again, the time looks very strange, how was this initiated? Can you
provide copies of your configs? The detailed backup logs?
I've never manually initiated a backup; all were started automatically by BPC.

Full configuration: http://pastebin.com/vU3Na2tP

Adam Goryachev
2016-01-12 01:19:03 UTC
Permalink
Please don't trim so much; it is very useful to keep the original
log that is being discussed.
Post by Gandalf Corvotempesta
Post by Adam Goryachev
The backup completed with 4181687 files (looks like a bug because bytes
equals # files) however, it says there was 1 xferErrs. So we don't know
if this caused BPC to mark the backup as incomplete or not.
Exactly. This server has more or less 150GB of files.
However, we still don't know about the error.
Post by Gandalf Corvotempesta
Post by Adam Goryachev
2016-01-10 00:21:40 Aborting backup up after signal INT
This line seems to be out of context. I don't think it is related to the
previous (completed) backup, and it shouldn't be related to the next
backup that hasn't started yet. Did you do something here?
Probably i've pressed "Stop/dequeue" from the admin panel to skip running
new backups for a while (it was just a test)
I've pasted the log entry back in, and I don't see that error being
logged from simply pressing stop/dequeue while a backup is not running.
I would expect to see that error if stopping a running backup.

Just as a test, I clicked stop/dequeue on one of my own BPC v4 servers
and asked it to prevent any backups for one hour. There was no entry
logged at all for this.

I strongly suspect something has happened here that could be explained by
trying lots of random things while getting it working.
I'd probably suggest stopping BPC, deleting the pc/server1 directory, then
starting BPC and re-running the first/initial backup. It might be quicker
than the real first backup since a lot of the files will still be in the
pool.
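A sketch of that reset as a shell helper; the TopDir path is the one seen earlier in this thread, and BackupPC must be stopped first (both are assumptions to adapt):

```shell
# Delete one host's per-host backup tree but keep the dedup pool, so the
# re-run can reuse already-transferred file contents. The :? expansions
# abort if an argument is empty, to avoid an accidental "rm -rf /pc/".
reset_host() {
    topdir="$1"; host="$2"
    rm -rf "${topdir:?}/pc/${host:?}"
}
# Usage (with the BackupPC daemon stopped):
#   reset_host /var/backups/backuppc srv1
```

Note this deletes only the directory under pc/, not pc/ itself, so ownership and permissions on the parent are preserved.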
Post by Gandalf Corvotempesta
Post by Adam Goryachev
Started a incremental backup, given the time, I'm guessing this was
started manually, certainly the default doesn't start backups with that
sort of offset, especially if it is the only server configured. How did
you start this backup? Maybe from the web interface?
Absolutely nothis. BPC started this incremental on their own.
OK, well, just looks strange.... though technically, it shouldn't matter
how the backups are started, as long as you don't use the CLI tools that
are not meant to be used directly.
Post by Gandalf Corvotempesta
Post by Adam Goryachev
In any case, it has failed. Can you provide the actual backup log so we
can see more details about why it failed?
How can I get? I've posted the only log that I have in control panel.
Do you whant the error log ? It's huge and full of sensitive data
Yes, the error log or the xferlog. It should only contain
filenames/directory names, hopefully that is not so sensitive data? At
best, it should contain a lot more detail from any errors encountered.
We don't need (or want) the full log, that could be 100's of MB, but a
reasonable snippet to clearly show what was happening before/after the
relevant errors. At least the first 20 lines and last 20 lines are
usually reasonably useful.

I'm referring to the log file for the specific backup.... So click on
the host, you see the table of backups, and underneath is a table of
Xfer Error Summary which provides links to the XferLOG and Errors.
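For producing such a snippet from the command line, a hedged sketch: BackupPC v4 stores the transfer log compressed, and BackupPC_zcat in BackupPC's bin directory decompresses it. The exact paths in the usage comment are guesses based on this thread.

```shell
# Print the first and last 20 lines of a (possibly huge) log file,
# which is usually enough context to post to the list.
snip_log() {
    sed -n '1,20p' "$1"
    echo '[... trimmed ...]'
    tail -n 20 "$1"
}
# e.g. (paths are illustrative; adjust TopDir, host and backup number):
#   /usr/local/BackupPC/bin/BackupPC_zcat \
#       /var/backups/backuppc/pc/srv1/XferLOG.1.z > /tmp/xferlog
#   snip_log /tmp/xferlog
```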
Post by Gandalf Corvotempesta
Post by Adam Goryachev
2016-01-10 19:23:47 full backup started for directory full
2016-01-11 18:16:02 full backup 1 complete, 4090748 files, 4090748
bytes, 24 xferErrs (0 bad files, 0 bad shares, 24 other)
A full can certainly happen after a failed incremental, we don't know
why. Again, the time looks very strange, how was this initiated? Can you
provide copies of your configs? The detailed backup logs?
I've never manually initiated a backup. All automatically by BPC.
Full configuration: http://pastebin.com/vU3Na2tP
No idea.... please post any logs or configs inline in the email. Not
only does that allow people to look at them in the future (ie, the
archives in 2 years time) but also it lets everyone see them without
having to go and refer to another website.

Skipping lines that are blank/commented is usually a good idea. Both the
global config and the host specific config would be useful. Obviously,
obfuscate username/password/etc as needed, but the more you edit the
file the harder it can be to work out what you have obfuscated. eg, it
can be better to change the username jamesw to malcom rather than
changing it to x.

Regards,
Adam
--
Adam Goryachev Website Managers www.websitemanagers.com.au

Gandalf Corvotempesta
2016-01-12 07:53:54 UTC
Permalink
Post by Adam Goryachev
Please don't trim so much, it is very useful to still keep the original
log that is being discussed.
I've not trimmed at all.
Post by Adam Goryachev
I would strongly suggest that something has happened here which could be
explained by trying lots of random things when trying to get it working.
I'd probably suggest to stop BPC, delete the pc/server1 directory, and
then start BPC, and re-run the first/initial backup. It might be quicker
than the real first backup since a lot of the files will still be in the
pool.
OK, I'll try after adding more RAM.
Post by Adam Goryachev
Yes, the error log or the xferlog. It should only contain
filenames/directory names, hopefully that is not so sensitive data? At
best, it should contain a lot more detail from any errors encountered.
We don't need (or want) the full log, that could be 100's of MB, but a
reasonable snippet to clearly show what was happening before/after the
relevant errors. At least the first 20 lines and last 20 lines are
usually reasonably useful.
Skipping lines that are blank/commented is usually a good idea. Both the
global config and the host specific config would be useful. Obviously,
obfuscate username/password/etc as needed, but the more you edit the
file the harder it can be to work out what you have obfuscated. eg, it
can be better to change the username jamesw to malcom rather than
changing it to x.
I've posted the whole config, not trimmed at all.
I don't have a host-specific config except for a single line with the
rsync password. What I've posted is *exactly* what I'm using; nothing was
trimmed or customized except replacing server hostnames with 'srv1',
'srv2' and so on, and usernames with 'x'.

Patrick Begou
2016-01-12 07:29:54 UTC
Permalink
Post by Gandalf Corvotempesta
Exactly. This server has more or less 150GB of files.
I've been using BackupPC 3.3.1 for a while now, and backing up a partition of
more than 100GB (from a MacBook client) required more than 24 hours on a
BackupPC server with only 2GB of RAM. Increasing the RAM to 6GB totally solved
the problem. Now a full backup of 180GB takes a little more than 3 hours.

Maybe you should first increase the RAM of your server.

Patrick
--
===================================================================
| Equipe M.O.S.T. | |
| Patrick BEGOU | mailto:***@grenoble-inp.fr |
| LEGI | |
| BP 53 X | Tel 04 76 82 51 35 |
| 38041 GRENOBLE CEDEX | Fax 04 76 82 52 71 |
===================================================================


Gandalf Corvotempesta
2016-01-12 08:00:57 UTC
Permalink
Post by Patrick Begou
I'm using BackupPC 3.3.1 for a while now and backing up a partition of more than
100GB (from a macbook client) was requiering more than 24 hours on a backupPC
server with only 2GB of RAM. Increasing the RAM to 6GB has totally solved the
problem. Now a full backup on 180GB requires a little bit more than 3 hours.
I'm doing this right now.
I'll remove the whole /pc/ directory, add RAM and start again.

What do you think, should I also remove the pool directory and start
totally clean ?

Patrick Begou
2016-01-12 08:50:58 UTC
Permalink
I haven't removed anything after upgrading the RAM.
My BackupPC is a virtual host under Proxmox; I just increased the allotted
RAM from 2GB to 6GB without restarting anything.

Patrick
Post by Gandalf Corvotempesta
Post by Patrick Begou
I'm using BackupPC 3.3.1 for a while now and backing up a partition of more than
100GB (from a macbook client) was requiering more than 24 hours on a backupPC
server with only 2GB of RAM. Increasing the RAM to 6GB has totally solved the
problem. Now a full backup on 180GB requires a little bit more than 3 hours.
I'm doing this right now.
I'll remove the whole /pc/ directory, add ram and start again.
What do you think, should I also remove the pool directory and start
totally clean ?


Adam Goryachev
2016-01-12 10:05:28 UTC
Permalink
Post by Patrick Begou
I have'nt removed anything after upgrading the RAM.
My BackupPC is a virtual host under proxmox, I've just increased the RAM allowed
from 2GB to 6GB without restarting anything.
This screams "problem alert" right here: you are sharing the BackupPC
resources with one or more other servers. Again, I would strongly suggest
that you look at where your performance bottleneck is in order to fix the
performance problem you are having. Randomly throwing more hardware at the
problem isn't necessarily the best solution (though a bit of extra RAM is
almost certainly a good idea).

Regards,
Adam
Post by Patrick Begou
Patrick
Post by Gandalf Corvotempesta
Post by Patrick Begou
I'm using BackupPC 3.3.1 for a while now and backing up a partition of more than
100GB (from a macbook client) was requiering more than 24 hours on a backupPC
server with only 2GB of RAM. Increasing the RAM to 6GB has totally solved the
problem. Now a full backup on 180GB requires a little bit more than 3 hours.
I'm doing this right now.
I'll remove the whole /pc/ directory, add ram and start again.
What do you think, should I also remove the pool directory and start
totally clean ?
Adam Goryachev
2016-01-12 10:07:25 UTC
Permalink
Post by Gandalf Corvotempesta
Post by Patrick Begou
I'm using BackupPC 3.3.1 for a while now and backing up a partition of more than
100GB (from a macbook client) was requiring more than 24 hours on a backupPC
server with only 2GB of RAM. Increasing the RAM to 6GB has totally solved the
problem. Now a full backup on 180GB requires a little bit more than 3 hours.
I'm doing this right now.
I'll remove the whole /pc/ directory, add ram and start again.
Don't remove the pc directory itself, only the directories under it.
Otherwise, you will probably need to manually re-create it and ensure
its permissions and ownership are correct.
Post by Gandalf Corvotempesta
What do you think, should I also remove the pool directory and start
totally clean ?
I would keep the pool in an effort to reduce the time it will take for
the first backup.... I don't think it should be corrupted or causing any
problem.

Regards,
Adam

Gandalf Corvotempesta
2016-01-13 19:39:14 UTC
Permalink
Post by Patrick Begou
I'm using BackupPC 3.3.1 for a while now and backing up a partition of more than
100GB (from a macbook client) was requiring more than 24 hours on a backupPC
server with only 2GB of RAM. Increasing the RAM to 6GB has totally solved the
problem. Now a full backup on 180GB requires a little bit more than 3 hours.
Ok, let's start again from the beginning:

# cat /etc/debian_version
8.2

# grep MemTotal /proc/meminfo
MemTotal: 8190512 kB

# grep 'model name' /proc/cpuinfo | head -n1
model name : Intel(R) Xeon(R) CPU E5410 @ 2.33GHz

# mount | grep backups
/dev/mapper/vg0-lv_backups on /var/backups type ext3
(rw,nosuid,nodev,noexec,noatime,nodiratime,nobarrier)

Click on "start full backup" in admin page

# cat /var/log/BackupPC/LOG
2016-01-13 20:21:26 Running BackupPC_Admin_SCGI (pid=29415)
2016-01-13 20:22:11 Reading hosts file
2016-01-13 20:22:11 BackupPC started, pid 29463
2016-01-13 20:22:11 Running BackupPC_Admin_SCGI (pid=29464)
2016-01-13 20:22:11 Next wakeup is 2016-01-13 23:00:00
2016-01-13 20:28:58 User backuppc requested backup of srv1 (srv1)
2016-01-13 20:28:58 Started full backup on srv1 (pid=29473, share=full)


Let's wait....

Gandalf Corvotempesta
2016-01-13 20:07:01 UTC
Permalink
2016-01-13 20:39 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
# cat /var/log/BackupPC/LOG
2016-01-13 20:21:26 Running BackupPC_Admin_SCGI (pid=29415)
2016-01-13 20:22:11 Reading hosts file
2016-01-13 20:22:11 BackupPC started, pid 29463
2016-01-13 20:22:11 Running BackupPC_Admin_SCGI (pid=29464)
2016-01-13 20:22:11 Next wakeup is 2016-01-13 23:00:00
2016-01-13 20:28:58 User backuppc requested backup of srv1 (srv1)
2016-01-13 20:28:58 Started full backup on srv1 (pid=29473, share=full)
Same issue. This is "rsync.log" on the client side:

2016/01/13 20:48:21 [19453] 2016/01/13 20:48:21: host unknown
(172.17.0.1) send
home/user/domains/domain.it/public_html/cgi-bin/.htaccess (17 bytes).
Total 43 bytes.
2016/01/13 20:48:21 [19453] 2016/01/13 20:48:21: host unknown
(172.17.0.1) send
home/user/domains/domain.it/.htpasswd/.protected.list (0 bytes). Total
39 bytes.
2016/01/13 21:03:29 [19453] 2016/01/13 21:03:29: host unknown
(172.17.0.1) send home/user/domains/domain.it/awstats/.htaccess (233
bytes). Total 276 bytes.
2016/01/13 21:03:29 [19453] 2016/01/13 21:03:29: host unknown
(172.17.0.1) send
home/user/domains/domain.it/awstats/awstats.domain.it.1003.alldomains.html
(5934 bytes). Total 5977 bytes.

Totally frozen between 2016/01/13 20:48:21 and 2016/01/13 21:03:29

The BPC server side was writing to the pool (I saw a lot of activity by
running "strace" on rsync_bpc)

Gandalf Corvotempesta
2016-01-13 20:52:55 UTC
Permalink
2016-01-13 21:07 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
2016/01/13 20:48:21 [19453] 2016/01/13 20:48:21: host unknown
(172.17.0.1) send
home/user/domains/domain.it/public_html/cgi-bin/.htaccess (17 bytes).
Total 43 bytes.
2016/01/13 20:48:21 [19453] 2016/01/13 20:48:21: host unknown
(172.17.0.1) send
home/user/domains/domain.it/.htpasswd/.protected.list (0 bytes). Total
39 bytes.
2016/01/13 21:03:29 [19453] 2016/01/13 21:03:29: host unknown
(172.17.0.1) send home/user/domains/domain.it/awstats/.htaccess (233
bytes). Total 276 bytes.
2016/01/13 21:03:29 [19453] 2016/01/13 21:03:29: host unknown
(172.17.0.1) send
home/user/domains/domain.it/awstats/awstats.domain.it.1003.alldomains.html
(5934 bytes). Total 5977 bytes.
Totally frozen between 2016/01/13 20:48:21 and 2016/01/13 21:03:29
Raw performance by direct rsync between these two servers:

receiving incremental file list
test.img
1,073,741,824 100% 25.04MB/s 0:00:40 (xfr#1, to-chk=0/1)

sent 77 bytes received 1,073,873,061 bytes 24,686,738.80 bytes/sec
total size is 1,073,741,824 speedup is 1.00


24.5MB/s: not great, but not too bad, and about 24 times faster than BPC
(with BPC I got about 1MB/s)



Raw performance writing to disk on the BPC server (same partition used by
BPC as storage):

# dd if=/dev/zero of=/var/backups/test.img bs=1M count=10000
^C8815+0 records in
8815+0 records out
9243197440 bytes (9.2 GB) copied, 142.267 s, 65.0 MB/s



So, the network is not an issue, "plain" rsync is not an issue, and disk
write performance is not an issue...
The only remaining suspect is BPC.
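One caveat on the dd test above: BPC's pool writes are many small files, not one big stream. A rough sketch of a closer comparison (my suggestion, not from the thread; "dir" is a temp directory here for safety, point it at the pool filesystem such as /var/backups to measure the real disks):

```shell
#!/bin/bash
# Compare one streaming 1 MB write against the same 1 MB written as
# 256 fsync'd 4 KB files -- closer to BackupPC's pool write pattern.
dir=$(mktemp -d)
time dd if=/dev/zero of="$dir/big" bs=1M count=1 conv=fsync 2>/dev/null
time for i in $(seq 1 256); do
  dd if=/dev/zero of="$dir/f$i" bs=4k count=1 conv=fsync 2>/dev/null
done
ls "$dir" | wc -l    # prints 257 (1 big + 256 small files)
rm -rf "$dir"
```

If the small-file loop is drastically slower per byte, the disks (or the filesystem journal settings) are the bottleneck for BPC-style writes even though streaming dd looks fine.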

Les Mikesell
2016-01-13 21:07:45 UTC
Permalink
On Wed, Jan 13, 2016 at 2:52 PM, Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Totally frozen between 2016/01/13 20:48:21 and 2016/01/13 21:03:29
Did your strace test show a hanging system call on any of the active
processes in this time?
Post by Gandalf Corvotempesta
raw performance writing to disk on BPC server (same partitions used by
# dd if=/dev/zero of=/var/backups/test.img bs=1M count=10000
^C8815+0 records in
8815+0 records out
9243197440 bytes (9.2 GB) copied, 142.267 s, 65.0 MB/s
That's not at all like what rsync would be doing when it merges
changes to a compressed file.
--
Les Mikesell
***@gmail.com

Gandalf Corvotempesta
2016-01-13 21:14:51 UTC
Permalink
Post by Les Mikesell
Did your strace test show a hanging system call on any of the active
processes in this time?
Nothing is hung. When this occurs, no transfer is happening over the network
and both "rsync_bpc" processes are producing tons of these:


FIRST PROCESS:
read(5, "82f3016ca8f4b309aa141fb1aee9dfb0"..., 8184) = 4117
select(6, [5], [], NULL, {60, 0}) = 1 (in [5], left {59, 855296})
read(5, "34166a01128e65c0e98ce44442a634ea"..., 8184) = 4093
select(6, [5], [], NULL, {60, 0}) = 1 (in [5], left {59, 988318})

SECOND PROCESS:
select(4, [3], [], NULL, {60, 0}) = 1 (in [3], left {59, 999934})
read(3, "63db4c25f333ca\0\202\3Vg9\211\247\346\"N\233<\215\320x\4\272"...,
4092) = 2896
select(4, [3], [], NULL, {60, 0}) = 1 (in [3], left {59, 999675})
read(3, "197addc7b4\0001\2V\v\v\222\2655\242\213\242\310P\222\343\331\211Q\244T\237"...,
1196) = 1196
select(7, NULL, [6], [6], {60, 0}) = 1 (out [6], left {59, 999999})
write(6, "690f3f5dc38571ac1d63db4c25f333ca"..., 4092) = 4092
select(7, NULL, [6], [6], {60, 0}) = 1 (out [6], left {59, 999998})
write(6, "\363a\306v}\376\364@\241\236{:A ", 14) = 14
select(4, [3], [], NULL, {60, 0}) = 1 (in [3], left {59, 906237})
read(3, "\374\17\0\7", 4) = 4
select(4, [3], [], NULL, {60, 0}) = 1 (in [3], left {59, 999806})
read(3, "ff5cd88ec882380bd1ade93a3c24\0M\2V"..., 4092) = 2896


What's the meaning of the two rsync_bpc processes?
Post by Les Mikesell
That's not at all like what rsync would be doing when it merges
changes to a compressed file.
I know, but a slow disk would also slow down plain rsync, not just BPC.
This test tells me that the disks are working properly and the bottleneck
should be somewhere else.

Gandalf Corvotempesta
2016-01-13 21:57:23 UTC
Permalink
2016-01-13 22:14 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Nothing is hanged. When this occurs, no transfer is happening via network
It seems to be a write delay: rsync doesn't send new files until BPC has
finished writing to disk.

BTW, changing the compression level from 3 to 1 seems to make it a little faster.
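A quick way to see how much CPU the compression level costs (my sketch, not from the thread; BackupPC uses zlib, the same family as gzip, so gzip levels are a reasonable stand-in):

```shell
#!/bin/bash
# Compress the same compressible data at levels 1 and 3 and compare the
# "user" CPU time reported by time(1). Numbers will vary by machine.
yes "backuppc compression test data" | head -c 20000000 > /tmp/sample
time gzip -1 -c /tmp/sample > /dev/null
time gzip -3 -c /tmp/sample > /dev/null
rm -f /tmp/sample
```

If level 3 costs noticeably more user time than level 1 here, the same gap applies to every file BPC stores, which fits the observed speedup.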

Les Mikesell
2016-01-13 23:59:05 UTC
Permalink
On Wed, Jan 13, 2016 at 3:57 PM, Gandalf Corvotempesta
Post by Gandalf Corvotempesta
2016-01-13 22:14 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Nothing is hanged. When this occurs, no transfer is happening via network
Seems to be a write delay. rsync doesn't send new files until BPC has
finished writes to disks.
BTW, by changing compressione level from 3 to 1, seems to be a little faster.
If you aren't seeing system calls that haven't completed, the process
may be CPU-bound in user space - and a speedup from less compression
would make sense in that case. But that amount of time seems
extreme.
--
Les Mikesell
***@gmail.com

Adam Goryachev
2016-01-13 22:29:00 UTC
Permalink
Post by Gandalf Corvotempesta
Post by Les Mikesell
Did your strace test show a hanging system call on any of the active
processes in this time?
Nothing is hanged. When this occurs, no transfer is happening via network
read(5, "82f3016ca8f4b309aa141fb1aee9dfb0"..., 8184) = 4117
select(6, [5], [], NULL, {60, 0}) = 1 (in [5], left {59, 855296})
read(5, "34166a01128e65c0e98ce44442a634ea"..., 8184) = 4093
select(6, [5], [], NULL, {60, 0}) = 1 (in [5], left {59, 988318})
select(4, [3], [], NULL, {60, 0}) = 1 (in [3], left {59, 999934})
read(3, "63db4c25f333ca\0\202\3Vg9\211\247\346\"N\233<\215\320x\4\272"...,
4092) = 2896
select(4, [3], [], NULL, {60, 0}) = 1 (in [3], left {59, 999675})
read(3, "197addc7b4\0001\2V\v\v\222\2655\242\213\242\310P\222\343\331\211Q\244T\237"...,
1196) = 1196
select(7, NULL, [6], [6], {60, 0}) = 1 (out [6], left {59, 999999})
write(6, "690f3f5dc38571ac1d63db4c25f333ca"..., 4092) = 4092
select(7, NULL, [6], [6], {60, 0}) = 1 (out [6], left {59, 999998})
select(4, [3], [], NULL, {60, 0}) = 1 (in [3], left {59, 906237})
read(3, "\374\17\0\7", 4) = 4
select(4, [3], [], NULL, {60, 0}) = 1 (in [3], left {59, 999806})
read(3, "ff5cd88ec882380bd1ade93a3c24\0M\2V"..., 4092) = 2896
What's the meaning of two rsync_bpc processes?
Post by Les Mikesell
That's not at all like what rsync would be doing when it merges
changes to a compressed file.
I know, but having a slow disk would slow down also rsync and bpc.
This test told me that disks are working properly and bottleneck shold
be somewhere else.
Ummm, really? I think you are confused. Where exactly the
above processes are reading from and writing to (most likely it isn't the
network, which means it is almost certainly your backuppc server's disks)
will tell you where the bottleneck is. You have identified that the client
is providing the data to the server quickly enough, but the server is too
slow to process that data (ie, do whatever needs to happen to save it in
the correct place). This is almost certainly due to one or more of the
following reasons:
1) Slow I/O
2) Not enough RAM leading to not enough cache leading to slow I/O
3) Slow CPU
Post by Gandalf Corvotempesta
receiving incremental file list
test.img
1,073,741,824 100% 25.04MB/s 0:00:40 (xfr#1, to-chk=0/1)
sent 77 bytes received 1,073,873,061 bytes 24,686,738.80 bytes/sec
total size is 1,073,741,824 speedup is 1.00
24.5MB/s, not too much, but not too bad. 24 times faster than BPC
(with BPC i got about 1MB/s)
This is completely rubbish, it isn't a useful comparison of anything. I
am almost certain that your actual client isn't made up of files with a
average size of 1GB. In fact the snippet of the rsyncd log that you
previously provided showed very small files. It still isn't a meaningful
comparison, but at least it is more realistic if you used the actual
files you are trying to backup, even if it is only a subset of them.

BTW, the reason it isn't so relevant is that backuppc does a lot more
work on the server side than plain rsync; the client-side performance is
still relevant, though, and could at least show that the client is capable.
Post by Gandalf Corvotempesta
raw performance writing to disk on BPC server (same partitions used by
# dd if=/dev/zero of=/var/backups/test.img bs=1M count=10000
^C8815+0 records in
8815+0 records out
9243197440 bytes (9.2 GB) copied, 142.267 s, 65.0 MB/s
Totally irrelevant. BackupPC is doing lots of small random reads and
writes. However, maybe that is relevant, because 65MB/s on any single
HDD from the past 5 years, let alone a RAID array, is abysmal for
streaming writes. Even a single drive should be capable of at least 100MB/s.

Here is the same statistic from one of my BPC v3 servers:
dr:/mnt/imagestore# dd if=/dev/zero of=/var/backups/test.img bs=1M
count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 16.135 s, 650 MB/s

This is a LV sitting on a RAID5 array:
md0 : active raid5 sde1[4] sdc1[3] sdd1[2] sdb1[0]
11720658432 blocks super 1.2 level 5, 512k chunk, algorithm 2
[4/4] [UUUU]

Which is using these drives:
Model Family: Western Digital Red (AF)
Device Model: WDC WD40EFRX-68WT0N0
User Capacity: 4,000,787,030,016 bytes [4.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm

I'm sure you said you had 7200rpm disks, so you should get even better
performance for both random r/w as well as streaming writes. Which
brings me back to my earlier concern that you are using a VM for
backuppc, it is sharing it's performance with other things, which works
very poorly when dealing with spinning disks (even a streaming write
like your example is mixed with other random I/O which means it kills
performance).

Please diagnose and resolve the underlying performance issues, then come
back to BPC and see how it performs.

Regards,
Adam
--
Adam Goryachev Website Managers www.websitemanagers.com.au

Adam Goryachev
2016-01-13 23:06:37 UTC
Permalink
Post by Adam Goryachev
Post by Gandalf Corvotempesta
raw performance writing to disk on BPC server (same partitions used by
# dd if=/dev/zero of=/var/backups/test.img bs=1M count=10000
^C8815+0 records in
8815+0 records out
9243197440 bytes (9.2 GB) copied, 142.267 s, 65.0 MB/s
Totally irrelevant. BackupPC is doing lots of small random reads and
writes. However, maybe that is relevant, because 65MB/s on any single
HDD from the past 5 years, let alone a RAID array is abysmal for
streaming writes. Even a single drive should be capable of at least 100MB/s.
dr:/mnt/imagestore# dd if=/dev/zero of=/var/backups/test.img bs=1M
count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 16.135 s, 650 MB/s
Whoops, that was writing to the SSD for the root FS.... I was rather
surprised at the high number, but didn't stop and think properly.
Please see the revised stat:
dr:/mnt/imagestore# dd if=/dev/zero of=test.img bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 81.5089 s, 129 MB/s

However, note that this is happening while BackupPC_nightly is running,
so there is plenty of other IO happening in the background. (in v3 there
is a nightly process which checks every file in the pool to see if it is
still needed or not).
Post by Adam Goryachev
md0 : active raid5 sde1[4] sdc1[3] sdd1[2] sdb1[0]
11720658432 blocks super 1.2 level 5, 512k chunk, algorithm 2
[4/4] [UUUU]
Model Family: Western Digital Red (AF)
Device Model: WDC WD40EFRX-68WT0N0
User Capacity: 4,000,787,030,016 bytes [4.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
I'm sure you said you had 7200rpm disks, so you should get even better
performance for both random r/w as well as streaming writes. Which
brings me back to my earlier concern that you are using a VM for
backuppc, it is sharing it's performance with other things, which
works very poorly when dealing with spinning disks (even a streaming
write like your example is mixed with other random I/O which means it
kills performance).
Please diagnose and resolve the underlying performance issues, then
come back to BPC and see how it performs.
Regards,
Adam
--
Adam Goryachev Website Managers www.websitemanagers.com.au

Gandalf Corvotempesta
2016-01-13 23:12:05 UTC
Permalink
Post by Adam Goryachev
Whoops, that was writing to the SSD for the root FS.... I was rather
surprised at the high number, but didn't stop and think properly.
dr:/mnt/imagestore# dd if=/dev/zero of=test.img bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 81.5089 s, 129 MB/s
This is mine:

# dd if=/dev/zero of=/var/backups/test.img bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 15.4692 s, 136 MB/s
Post by Adam Goryachev
Post by Adam Goryachev
I'm sure you said you had 7200rpm disks, so you should get even better
performance for both random r/w as well as streaming writes. Which
brings me back to my earlier concern that you are using a VM for
backuppc, it is sharing it's performance with other things, which
works very poorly when dealing with spinning disks (even a streaming
write like your example is mixed with other random I/O which means it
kills performance).
I'm not using a VM (please don't confuse me with other users). It's a
DELL PowerEdge 2950 with 8GB RAM, hardware RAID and 6x2TB @7200rpm disks
in RAID-5.

Gandalf Corvotempesta
2016-01-13 23:14:14 UTC
Permalink
Post by Adam Goryachev
However, note that this is happening while BackupPC_nightly is running,
so there is plenty of other IO happening in the background. (in v3 there
is a nightly process which checks every file in the pool to see if it is
still needed or not).
There is no nightly running at this time here. I run the nightly in the
middle of the day :)

$Conf{WakeupSchedule} = [12, 23, 24, 1, 2, 3, 4, 5, 6, 7];

Holger Parplies
2016-01-14 17:28:56 UTC
Permalink
Hi,

much has been said in this thread, but I believe this has not:
You do realize that BackupPC 4.x is alpha software, right? You're lucky if it
works at all, you should be surprised if it performs well, and you're not
using it in a production environment anyway, right? Ok, the reality might not
be quite as bad, but you need to be aware that version 4 is not a newer
version 3 with some improvements, it is basically a re-write with a whole new
concept behind it.

If you are looking for something stable, I'd recommend BackupPC 3.x, which
you'll undoubtedly also get more and better support for here, if you even
need it. And yes, it *will* be slower than native rsync.

For me (and, I believe, for many others), version 3 (and even version 2!) works
so well that there is no reason to move to version 4.

Hope that helps.

Regards,
Holger

Les Mikesell
2016-01-14 18:55:58 UTC
Permalink
Post by Holger Parplies
For me (and, I believe, for many others), version 3 (and even version 2!) works
so well that there is no reason to move to version 4.
I have to agree with that. Really about the only issue people have
with v3 is that if you ever want to copy the entire archive tree to a
new system or to make an offsite copy it is a problem for
file-oriented copy methods to recreate the large number of hardlinks.

The changes in v4 may help some situations, but I wonder if the
attempt to locate existing matching content from different locations
might waste time where there are large numbers of tiny files.

If you do try v3, note that using the checksum-seed=32761 option will
speed up full runs but you won't see the difference until the 3rd
full. That is, the block checksums are cached on the 2nd full copy of
a file and then subsequently used so the server side does not have to
uncompress and recompute them.
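In a v3 setup this is a config.pl change; a sketch only (the existing
argument lists and the config path vary by install, and the '...' below
is a placeholder for whatever arguments you already have):

```perl
# Append the option to both rsync argument lists in config.pl (v3).
$Conf{RsyncArgs}        = [ '...', '--checksum-seed=32761' ];
$Conf{RsyncRestoreArgs} = [ '...', '--checksum-seed=32761' ];
```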
--
Les Mikesell
***@gmail.com

Adam Goryachev
2016-01-14 22:00:07 UTC
Permalink
Post by Les Mikesell
Post by Holger Parplies
For me (and, I believe, for many others), version 3 (and even version 2!) works
so well that there is no reason to move to version 4.
I have to agree with that. Really about the only issue people have
with v3 is that if you ever want to copy the entire archive tree to a
new system or to make an offsite copy it is a problem for
file-oriented copy methods to recreate the large number of hardlinks.
The changes in v4 may help some situations, but I wonder if the
attempt to locate existing matching content from different locations
might waste time where there are large numbers of tiny files.
I must say that this is the only reason for me choosing to run v4. For
some reason, a number of systems I need to support backups for have a
workflow in which photos are taken, downloaded from the camera to the
server, and then renamed/copied/moved to various different
folders. With v4, BPC only downloads them once, and each time they are
moved/copied to a different folder there is no transfer. This helps my
backups actually work/complete in a much shorter time.

However, for most other scenarios, I doubt it would make a lot of
difference.

I would suggest anyone considering BPC should use v3 unless they
specifically need the v4 features, and even then, are prepared for the
possible shortcomings inherent in using alpha software.

Regards,
Adam
--
Adam Goryachev
Website Managers
P: +61 2 8304 0000 ***@websitemanagers.com.au
F: +61 2 8304 0001 www.websitemanagers.com.au
--
Adam Goryachev Website Managers www.websitemanagers.com.au

Gandalf Corvotempesta
2016-01-16 12:04:14 UTC
Permalink
Post by Adam Goryachev
I would suggest anyone considering BPC should use v3 unless they
specifically need the v4 features, and even then, are prepared for the
possible shortcomings inherent in using alpha software.
I can't use v3 because I have servers with millions of files to be
transferred, and v3 takes ages in "Building file list".
v4 uses native rsync and an incremental file list. This is faster:
the transfer starts immediately, not after many many hours.

BTW, 1 server was backed up properly and is now running "fsck #0". As I
understood from the docs, BackupPC should not run while
fsck is running. I have a full dump of srv1 running, and srv2 is
doing "fsck #0". Is this bad, or can fsck run in parallel with other
backups?

How long does it usually take to complete? It is slowing down the
other backup, which seems to be frozen.

Alexander Moisseev
2016-01-16 18:53:44 UTC
Permalink
Post by Gandalf Corvotempesta
I can't use v3 because I have servers with millions of files to
transfer, and v3 takes ages in "Building files list".
You mentioned earlier you are trying to back up dovecot with Maildir. Have you considered moving from Maildir to mdbox? It will definitely help with BackupPC and rsync performance. Moreover, you would be able to use dovecot's own tools (plus your own scripts) or the dovecot replication feature instead of rsync/BackupPC. That would increase performance dramatically.

Gandalf Corvotempesta
2016-01-16 19:16:16 UTC
Permalink
Post by Alexander Moisseev
You mentioned earlier you are trying to back up dovecot with Maildir. Have you considered moving from Maildir to mdbox? It will definitely help with BackupPC and rsync performance. Moreover, you would be able to use dovecot's own tools (plus your own scripts) or the dovecot replication feature instead of rsync/BackupPC. That would increase performance dramatically.
mdbox has been available since version 2. Some of my servers are very old, still on dovecot 1.

Anyway, I currently have 2 full backups completed, in 13 and 9 hours;
both started in parallel today at 01:20. Let's see tonight. Two
incrementals should start...

Alexander Moisseev
2016-01-18 07:51:25 UTC
Permalink
Post by Gandalf Corvotempesta
Post by Alexander Moisseev
You mentioned earlier you are trying to back up dovecot with Maildir. Have you considered moving from Maildir to mdbox? It will definitely help with BackupPC and rsync performance. Moreover, you would be able to use dovecot's own tools (plus your own scripts) or the dovecot replication feature instead of rsync/BackupPC. That would increase performance dramatically.
mdbox has been available since version 2. Some of my servers are very old, still on dovecot 1.
I don't think old hardware will be a problem for version 2.

I am using low-end hardware for a small mail server (P4 2.80GHz, 2GB RAM, gmirror of 2 SATA drives) and the BackupPC server (P4 2.80GHz, 1.5GB RAM, SATA drive), with 100 Mbps networking.

About 2 years ago I was using Maildir and BPC v3 (rsyncd with checksum caching).
The mail storage had about 100 thousand files.
The first two full backups completed in about 4 hours each.
Subsequent full backups took about 30 minutes (when there were no concurrent backups).
I never tried BPC v4 to back up Maildir.

Then I converted the mail storage to mdbox + SIS (single instance attachment storage) + compression. I never tried BPC with mdbox, as I see no point, but I believe file-list building should be much quicker since the file count is significantly lower. I installed dovecot (mdbox + SIS + compression) on a test machine with a similar hardware configuration. The initial backup with "doveadm backup" took about 1-3 hours; I don't remember the exact time. Subsequent everyday backups take about 1-2 minutes and depend not on the mail storage size or message count, but on the changes made between backups.

Finally, for the backup end I installed dovecot (mdbox + SIS + compression) on a server with ZFS. Now, to keep history, I take a snapshot after each backup and apply BackupPC-like aging,
plus "zfs send" to make offline copies.

Hostname | Level | Size (MB) | Written (MB) | Used (MB) | Last available | Days old
----------------------------------------------------------------------------------------
tank1/vmail | 0 | 27860.45 | 23650.66 | 2128.86 | 2015-07-11 07:51 | 191
tank1/vmail | 0 | 27804.61 | 2073.86 | 612.73 | 2015-08-12 07:51 | 159
tank1/vmail | 0 | 29287.60 | 2227.53 | 686.91 | 2015-09-13 07:51 | 127
tank1/vmail | 0 | 30896.84 | 2488.86 | 601.22 | 2015-10-15 07:51 | 95
tank1/vmail | 0 | 35778.12 | 6177.88 | 1852.94 | 2015-11-16 07:51 | 63
tank1/vmail | 0 | 37257.94 | 4151.30 | 443.62 | 2015-12-18 07:51 | 31
tank1/vmail | 0 | 37257.47 | 519.09 | 186.52 | 2015-12-22 07:51 | 27
tank1/vmail | 0 | 37273.77 | 545.84 | 191.96 | 2015-12-26 07:51 | 23
tank1/vmail | 0 | 37564.53 | 686.51 | 165.02 | 2015-12-30 07:51 | 19
tank1/vmail | 0 | 37533.15 | 415.20 | 129.96 | 2016-01-03 07:51 | 15
tank1/vmail | 0 | 37454.53 | 220.35 | 99.68 | 2016-01-07 07:51 | 11
tank1/vmail | 0 | 37419.07 | 170.54 | 107.71 | 2016-01-11 07:51 | 7
tank1/vmail | 0 | 37520.83 | 317.79 | 112.94 | 2016-01-12 07:51 | 6
tank1/vmail | 0 | 37633.12 | 328.19 | 113.92 | 2016-01-13 07:51 | 5
tank1/vmail | 0 | 37792.96 | 345.29 | 83.54 | 2016-01-14 07:51 | 4
tank1/vmail | 0 | 37961.14 | 302.97 | 61.54 | 2016-01-15 07:51 | 3
tank1/vmail | 0 | 38045.83 | 178.00 | 12.08 | 2016-01-16 07:51 | 2
tank1/vmail | 0 | 38060.19 | 31.03 | 15.53 | 2016-01-17 07:51 | 1
tank1/vmail | 0 | 38029.69 | 38.44 | 0.00 | 2016-01-18 07:51 | 0
tank1/vmail | - | 38029.69 | 0.00 | 32687.01 | - | -

NAME PROPERTY VALUE
tank1/vmail used 43.8G
tank1/vmail logicalused 51.1G

The actual total size of mail stored on the mail server (before SIS and compression) is 62GB, about 170 thousand messages.
I know it is a very small server, but you can extrapolate the results.


Adam Goryachev
2016-01-18 01:46:00 UTC
Permalink
Post by Alexander Moisseev
Post by Gandalf Corvotempesta
I can't use v3 because I have servers with millions of files to
transfer, and v3 takes ages in "Building files list".
You mentioned earlier you are trying to back up dovecot with Maildir. Have you considered moving from Maildir to mdbox? It will definitely help with BackupPC and rsync performance. Moreover, you would be able to use dovecot's own tools (plus your own scripts) or the dovecot replication feature instead of rsync/BackupPC. That would increase performance dramatically.
I'm not sure of the exact numbers you are referring to ("millions" could
be 1 million or 50 million; likewise "ages" could be 5 minutes or 5
hours). In any case, back in 2012 I did a full backup of one of my mail
servers, and it completed in 70 minutes, with approx 1.3 million files
(that would have been with BPC v3). Equally, a more recent backup with
v4 shows it took 402 minutes (almost 7 hours), and the quickest recent
BPC v4 full backup took 146 minutes.

So, while generating the file list might take a long time with BPC v3, I
don't think it should significantly change the overall backup time. I
would expect my BPC server has gotten quicker over the years, as has the
mail server. Variations are likely due to concurrent backups, and
bandwidth limitations.

It would be interesting to hear an update of where you are at, and what
has happened (either good or bad).

Regards,
Adam
--
Adam Goryachev Website Managers www.websitemanagers.com.au

Alexander Moisseev
2016-01-18 07:52:12 UTC
Permalink
Post by Adam Goryachev
So, while generating the file list might take a long time with BPC v3, I
don't think it should significantly change the overall backup time.
Assuming checksum caching is configured, that holds for the first and second backups, yes. But it is not true for subsequent backups, since most Maildir files never change over time.



Gandalf Corvotempesta
2016-01-18 08:38:03 UTC
Permalink
Post by Adam Goryachev
It would be interesting to hear an update of where you are at, and what
has happened (either good or bad).
Ok, today is the third day of the backups running "properly".

Some things are not clear:

2016-01-16 01:32:24 full backup started for directory full
2016-01-16 14:55:54 full backup 0 complete, 4588991 files, 4588991
bytes, 3 xferErrs (0 bad files, 0 bad shares, 3 other)

2016-01-17 05:44:10 incr backup started for directory full
2016-01-17 15:08:04 incr backup 1 complete, 4594227 files, 4594227
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)

2016-01-18 06:00:04 full backup started for directory full

As you can see, I did a full backup at 2016-01-16 01:32:24, then an
incremental at 2016-01-17 05:44:10. The third one, a full at 2016-01-18
06:00:04, is not normal.

I have this in my configuration:

$Conf{FullPeriod} = 27.97;
$Conf{FullKeepCnt} = 1;
$Conf{FullKeepCntMin} = 1;
$Conf{FullAgeMax} = 60;

$Conf{IncrPeriod} = 0.97;
$Conf{IncrKeepCnt} = 31;
$Conf{IncrKeepCntMin} = 7;
$Conf{IncrAgeMax} = 35;

$Conf{FillCycle} = 0;

so it should create 1 full roughly every 28 days, keeping just
1 full, and an incremental on all other days.
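For what it's worth, the intended behaviour can be modelled with a tiny sketch (a hypothetical simplification of the scheduler, not BackupPC source; the real BackupPC_dump decision also weighs blackout periods, FillCycle, errors, etc.):

```python
FULL_PERIOD = 27.97  # days, from $Conf{FullPeriod}
INCR_PERIOD = 0.97   # days, from $Conf{IncrPeriod}

def next_backup_type(days_since_last_full, days_since_last_backup):
    """Return which backup should be queued next under the config
    above (simplified model): a full only when the newest full is
    older than FullPeriod, otherwise an incremental once IncrPeriod
    has elapsed since the newest backup of any kind."""
    if days_since_last_full > FULL_PERIOD:
        return "full"
    if days_since_last_backup > INCR_PERIOD:
        return "incr"
    return "none"
```

Under this model, two days after the first full only an incremental should be due, which is what makes the extra full look wrong.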

Why is it creating a new full only 2 days after the first one?

Additionally, yesterday I had backup #0 set as full and #1 running as
an incremental.
Now #0 is no longer available (it is missing from the web interface), #1 is
an incremental, and #2 is active (doing a full right now).

Exactly the same happens on a second client.

What I saw yesterday is that it was copying #1 -> #0. Is this normal?

Adam Goryachev
2016-01-18 12:56:26 UTC
Permalink
Post by Gandalf Corvotempesta
Post by Adam Goryachev
It would be interesting to hear an update of where you are at, and what
has happened (either good or bad).
Ok, today is the third day of backup running "properly".
2016-01-16 01:32:24 full backup started for directory full
2016-01-16 14:55:54 full backup 0 complete, 4588991 files, 4588991
bytes, 3 xferErrs (0 bad files, 0 bad shares, 3 other)
2016-01-17 05:44:10 incr backup started for directory full
2016-01-17 15:08:04 incr backup 1 complete, 4594227 files, 4594227
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)
2016-01-18 06:00:04 full backup started for directory full
As you can see, I did a full backup 2016-01-16 01:32:24, then an
06:00:04 is not normal.
There is not a third full, only 2 fulls and one incremental.
Post by Gandalf Corvotempesta
$Conf{FullPeriod} = 27.97;
$Conf{FullKeepCnt} = 1;
$Conf{FullKeepCntMin} = 1;
$Conf{FullAgeMax} = 60;
$Conf{IncrPeriod} = 0.97;
$Conf{IncrKeepCnt} = 31;
$Conf{IncrKeepCntMin} = 7;
$Conf{IncrAgeMax} = 35;
$Conf{FillCycle} = 0;
so, it should create 1 full every 30 days (more or less), keeping just
1 full. All other days, it should create an incremental.
Why is creating a new full after 2 days the first one ?
Additionally, yesterday I had backup #0 set as full and #1 running as
incremental.
Now, #0 is not more available (is missing from web interface), #1 is
an incremental and #2 is active (is doing a full right now)
Well, that part makes sense: if #0 is deleted/missing and you want a
minimum of one full, then the system must do a full backup. The question
is why backup #0 vanished after the incremental and before the 2nd full
was done.
Post by Gandalf Corvotempesta
Exactly the same happens on a second client.
What I saw is that yesterday, it was copying #1 > #0. Is this normal ?
Did you read the BPC v4 documentation on how the backup procedure works?
It discusses what happens with the copying and renaming of backups and
the associated numbers. I don't have time to re-read it right now, but
it should provide some insight for you.

One thing to remember: it doesn't really matter how many full backups
are being done; my concern (if I were in your shoes) would be why the
backup disappeared.

Regards,
Adam

Gandalf Corvotempesta
2016-01-18 15:40:36 UTC
Permalink
Post by Adam Goryachev
Well, that part makes sense, if #0 is deleted/missing, and you want one
full minimum, then the system must do a full backup. The question is why
did backup #0 vanish after the incremental and before the 2nd full was done?
cut
Post by Adam Goryachev
One thing to remember, it doesn't really matter how many full backups
are being done, my concern (if I was in your shoes) would be why the
backup disappeared.
Exactly. I'm trying to debug why #0 disappeared.
The same happened with both servers. The same thing happened some days
ago (when all of my issues started).

I don't see anything in the logs (posted previously).

Les Mikesell
2016-01-18 16:07:11 UTC
Permalink
On Mon, Jan 18, 2016 at 9:40 AM, Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Post by Adam Goryachev
Well, that part makes sense, if #0 is deleted/missing, and you want one
full minimum, then the system must do a full backup. The question is why
did backup #0 vanish after the incremental and before the 2nd full was done?
cut
Post by Adam Goryachev
One thing to remember, it doesn't really matter how many full backups
are being done, my concern (if I was in your shoes) would be why the
backup disappeared.
Exactly. I'm trying to debug why #0 disappeared.
The same happened with both servers. The same thing happened some days
ago (when all of my issues started)
Why don't you put back the default settings to see if they work as
expected? With v3, the default of weekly fulls works out pretty well,
since a longer interval makes rsync transfer more (it sends all
changes since the last full), or do more work tracking multiple
directories if you use incremental levels. Also, I sometimes catch
typos or unexpected changes in config files by diffing them against
the default copy. In any case, bump up FullKeepCntMin to
something bigger. Remember, with BPC, multiple copies don't cost
much to store.

And going back to v3, you could get an idea of how long the file-list
phase might take by running something like this on the client:
"time find / -ctime 500 >/dev/null" (basically any option that forces
the inode values to be read),
plus some overhead for sending the list to the server (and a lot more
if it fills RAM and swaps...).
--
Les Mikesell
***@gmail.com

Gandalf Corvotempesta
2016-01-18 16:16:05 UTC
Permalink
Post by Les Mikesell
Why don't you put back the default settings to see they work as
expected?
Because I'm using the default settings except for the posted lines.

I won't decrease the full interval, not because of wasted space but to
avoid transferring everything from the client, which is resource-heavy.

Les Mikesell
2016-01-18 16:53:41 UTC
Permalink
On Mon, Jan 18, 2016 at 10:16 AM, Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Post by Les Mikesell
Why don't you put back the default settings to see they work as
expected?
Because i'm using default settings except posted lines.
I wont decrease full interval, not for waste of space but to not
transfer everything from the client, that is resource heavy.
Does anyone understand the docs at
http://backuppc.sourceforge.net/BackupPC-4.0.0alpha3_doc.html
for $Conf{FillCycle}? It looks like expiring is really based on
'filled' backups, not necessarily full backups as in v3. The part I
don't get is that it says the most recent backup is always filled,
whether it is full or incremental. If the most recent is filled, how
would any ever be unfilled?
--
Les Mikesell
***@gmail.com

Alexander Moisseev
2016-01-18 18:25:07 UTC
Permalink
Post by Les Mikesell
Does anyone understand the docs at
http://backuppc.sourceforge.net/BackupPC-4.0.0alpha3_doc.html
for $Conf{FillCycle} ? It looks like expiring is really based on
'filled' backups, not necessarily full backups as in v3. The part I
don't get is that it says the most recent backup is always filled
whether it is full or incremental. If the most recent is filled, how
would any ever be unfilled?
In v4, full/incremental relates to the backup transfer only; filled/unfilled relates to storage.

Here is a short explanation:
http://backuppc.sourceforge.net/BackupPC-4.0.0alpha3_doc.html#BackupPC-4.0

"Backups are stored as "reverse deltas" - the most recent backup is always filled and older backups are reconstituted by merging all the deltas starting with the nearest future filled backup and working backwards.

This is the opposite of V3, where incrementals are stored as "forward deltas" to a prior backup (typically the last full backup or prior lower-level incremental backup, or the last full in the case of rsync)."


So, the latest backup should always be filled, i.e. it actually contains all files. After a new backup is taken, any files it shares with the backups between it and the previous filled backup are removed from those backups recursively. Until the new backup was taken, the previous backup was filled; now it is just a "reverse delta".
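A toy model may make it concrete (my own sketch of the reverse-delta idea, not BackupPC's on-disk format; DELETED marks a file that is absent in that backup):

```python
DELETED = object()  # marker: file did not exist in this backup

def reconstruct(chain, n):
    """chain: list of (filled, delta) pairs, oldest -> newest.
    A filled backup's delta holds every file; an unfilled one holds
    only files that differ from the next-newer backup. Rebuild
    backup n by starting at the nearest filled backup at or after n
    and merging the deltas backwards."""
    k = next(i for i in range(n, len(chain)) if chain[i][0])
    view = dict(chain[k][1])          # start from the filled backup
    for i in range(k - 1, n - 1, -1):  # merge deltas backwards
        for path, content in chain[i][1].items():
            if content is DELETED:
                view.pop(path, None)
            else:
                view[path] = content
    return view
```

This also shows why expiry has to respect filled/unfilled status: deleting the filled backup at the newer end of a chain would make every older, unfilled backup unreconstructable.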

Les Mikesell
2016-01-18 18:33:45 UTC
Permalink
On Mon, Jan 18, 2016 at 12:25 PM, Alexander Moisseev
Post by Alexander Moisseev
Post by Les Mikesell
Does anyone understand the docs at
http://backuppc.sourceforge.net/BackupPC-4.0.0alpha3_doc.html
for $Conf{FillCycle} ? It looks like expiring is really based on
'filled' backups, not necessarily full backups as in v3. The part I
don't get is that it says the most recent backup is always filled
whether it is full or incremental. If the most recent is filled, how
would any ever be unfilled?
In v4 full/incremental related to backup transfer only, filled/unfilled related to storage.
http://backuppc.sourceforge.net/BackupPC-4.0.0alpha3_doc.html#BackupPC-4.0
"Backups are stored as "reverse deltas" - the most recent backup is always filled and older backups are reconstituted by merging all the deltas starting with the nearest future filled backup and working backwards.
This is the opposite of V3, where incrementals are stored as "forward deltas" to a prior backup (typically the last full backup or prior lower-level incremental backup, or the last full in the case of rsync)."
So, the latest backup should always be filled, i.e. it actually contains all files. After a new backup is taken, any files it shares with the backups between it and the previous filled backup are removed from those backups recursively. Until the new backup was taken, the previous backup was filled; now it is just a "reverse delta".
OK, but then how does that relate to the expiring part, which it says
is really based on the filled status?

--
Les Mikesell
***@gmail.com

Alexander Moisseev
2016-01-18 19:08:04 UTC
Permalink
Post by Les Mikesell
On Mon, Jan 18, 2016 at 12:25 PM, Alexander Moisseev
Post by Alexander Moisseev
Post by Les Mikesell
Does anyone understand the docs at
http://backuppc.sourceforge.net/BackupPC-4.0.0alpha3_doc.html
for $Conf{FillCycle} ? It looks like expiring is really based on
'filled' backups, not necessarily full backups as in v3. The part I
don't get is that it says the most recent backup is always filled
whether it is full or incremental. If the most recent is filled, how
would any ever be unfilled?
In v4 full/incremental related to backup transfer only, filled/unfilled related to storage.
http://backuppc.sourceforge.net/BackupPC-4.0.0alpha3_doc.html#BackupPC-4.0
"Backups are stored as "reverse deltas" - the most recent backup is always filled and older backups are reconstituted by merging all the deltas starting with the nearest future filled backup and working backwards.
This is the opposite of V3, where incrementals are stored as "forward deltas" to a prior backup (typically the last full backup or prior lower-level incremental backup, or the last full in the case of rsync)."
So, the latest backup should always be filled, i.e. it actually contains all files. After a new backup is taken, any files it shares with the backups between it and the previous filled backup are removed from those backups recursively. Until the new backup was taken, the previous backup was filled; now it is just a "reverse delta".
OK, but then how does that relate to the expiring part - which it says
is really based on the filled status?
v3: a 'full' shouldn't be expired while there is at least one 'incremental' that depends on it.
v4: a 'filled' shouldn't be expired while there is at least one 'unfilled' that depends on it.

Full/incremental doesn't matter for the v4 expiration algorithm, since it doesn't affect the ability to restore a particular backup.


Gandalf Corvotempesta
2016-01-19 08:39:12 UTC
Permalink
2016-01-18 16:40 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Exactly. I'm trying to debug why #0 disappeared.
The same happend with both server. The same happened even some days
ago (where all of my issue started)
Another backup is running.

Now I have these statuses in the web interface:
srv1: "backup full"
srv2: "merge #2 -> 1"

srv1 log:

2016-01-16 01:32:24 full backup started for directory full
2016-01-16 14:55:54 full backup 0 complete, 4588991 files, 4588991
bytes, 3 xferErrs (0 bad files, 0 bad shares, 3 other)
2016-01-17 05:44:10 incr backup started for directory full
2016-01-17 15:08:04 incr backup 1 complete, 4594227 files, 4594227
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)
2016-01-18 06:00:04 full backup started for directory full
2016-01-19 01:32:18 full backup 2 complete, 4597808 files, 4597808
bytes, 11 xferErrs (0 bad files, 0 bad shares, 11 other)
2016-01-19 09:10:06 incr backup started for directory full

srv2 log:

2016-01-16 01:32:29 full backup started for directory full
2016-01-16 10:51:25 full backup 0 complete, 2895473 files, 2895473
bytes, 13 xferErrs (0 bad files, 0 bad shares, 13 other)
2016-01-17 04:41:21 incr backup started for directory full
2016-01-17 20:20:46 incr backup 1 complete, 2892544 files, 2892544
bytes, 0 xferErrs (0 bad files, 0 bad shares, 0 other)
2016-01-18 04:00:03 full backup started for directory full
2016-01-18 21:42:57 full backup 2 complete, 2888912 files, 2888912
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)
2016-01-19 05:38:12 incr backup started for directory full
2016-01-19 09:09:51 incr backup 3 complete, 2896236 files, 2896236
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)


Backup #0 is still missing from both clients.

Gandalf Corvotempesta
2016-01-19 10:03:49 UTC
Permalink
2016-01-19 9:39 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
srv2: "merge #2 -> 1"
As expected, #2 now is missing. I have #1 (incremental), #3 (incremental)

There are some issues. As I'm using the standard BPC4
configuration, I think there are some bugs.

Les Mikesell
2016-01-19 16:01:54 UTC
Permalink
On Tue, Jan 19, 2016 at 4:03 AM, Gandalf Corvotempesta
Post by Gandalf Corvotempesta
2016-01-19 9:39 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
srv2: "merge #2 -> 1"
As expected, #2 now is missing. I have #1 (incremental), #3 (incremental)
there are some issues........ As I'm using standard BPC4
configuration, I think that there are some bugs.
I'd bump up FullKeepCntMin and IncrKeepCntMin to the numbers you want
to see if that keeps them from being expired early. I always did that
with v3 too just in case something odd happened with the system clock
and to keep backups after decommissioning a host.

Also, since the convention for the expiry parameters is that
"FullKeepPeriod/FullKeepCnt" etc. refer to *filled* backups and
"IncrKeepPeriod/IncrKeepCnt" refer to *unfilled* backups, if you change
the scheduling you may need to adjust the FillCycle setting.
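
Les's suggestion would translate roughly into per-host settings like
these (the parameter names are from the BackupPC v4 documentation; the
values are assumptions, only to illustrate the idea):

```perl
# Keep at least this many backups regardless of age, so an odd system
# clock or an early expiry cannot silently delete them.
$Conf{FullKeepCnt}    = 4;    # desired number of filled (full) backups
$Conf{FullKeepCntMin} = 4;    # never drop below this many fulls
$Conf{IncrKeepCnt}    = 30;   # desired number of unfilled (incr) backups
$Conf{IncrKeepCntMin} = 30;   # never drop below this many incrementals
```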
--
Les Mikesell
***@gmail.com

Gandalf Corvotempesta
2016-01-19 16:10:39 UTC
Permalink
Post by Les Mikesell
I'd bump up FullKeepCntMin and IncrKeepCntMin to the numbers you want
to see if that keeps them from being expired early. I always did that
with v3 too just in case something odd happened with the system clock
and to keep backups after decommissioning a host.
Bumped Full to 3 (it was 1) and Incremental to 21 (it was 7).

But, at least for fulls, this is a great waste of space.
Post by Les Mikesell
Also, since the convention for expiry parameters is
"FullKeepPeriod/FullKeepCnt" etc refer to *Filled* backups, and
"IncrKeepPeriod/IncrKeepCnt" refer to "Unfilled" backups if you change
the scheduling you may need to adjust the FillCycle setting.
So, what do you suggest? I would like to have 1 full every 30 days
and 1 incremental each day, being able to revert to any single day
back to the full (30 days before).

I can also try 1 full every 15 days, having 2 fulls each month,
but this will double the used space.

Les Mikesell
2016-01-19 16:31:33 UTC
Permalink
On Tue, Jan 19, 2016 at 10:10 AM, Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Post by Les Mikesell
I'd bump up FullKeepCntMin and IncrKeepCntMin to the numbers you want
to see if that keeps them from being expired early. I always did that
with v3 too just in case something odd happened with the system clock
and to keep backups after decommissioning a host.
Bumped Full to 3 (it was 1) and Incremental to 21 (it was 7)
But, at least for Full, this is a great waste of space.
Post by Les Mikesell
Also, since the convention for expiry parameters is
"FullKeepPeriod/FullKeepCnt" etc refer to *Filled* backups, and
"IncrKeepPeriod/IncrKeepCnt" refer to "Unfilled" backups if you change
the scheduling you may need to adjust the FillCycle setting.
So, what do you suggest ? I would like to have 1 full every 30 days
and 1 incremental each day
being able to revert to each day up to full (30 days before)
I can also try with 1 full every 15 days, having 2 full each month,
but this will double the used space.
The space usage will only be affected by files that are
changed/deleted between runs. Any file that still has identical
content will be pooled into one stored instance regardless of how many
backups contain it. That's the main advantage of BPC over other
systems.
--
Les Mikesell
***@gmail.com

Gandalf Corvotempesta
2016-01-19 16:46:47 UTC
Permalink
Post by Les Mikesell
The space usage will only be affected by files that are
changed/deleted between runs. Any file that still has identical
content will be pooled into one stored instance regardless of how many
backups contain it. That's the main advantage of BPC over other
systems.
Ok, let's try 1 full every 15 days and 1 incremental on every other
day, up to 30 backups.

Could you suggest a config? There is something wrong with mine.
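
For a schedule like that (1 full every 15 days, incrementals on the
other days, roughly 30 restore points), the v4 settings would look
something like the sketch below. The parameter names are from the
BackupPC documentation, but the values are assumptions, not a tested
recommendation:

```perl
$Conf{FullPeriod}  = 14.97;  # run a full roughly every 15 days
$Conf{IncrPeriod}  = 0.97;   # run an incremental roughly every day
$Conf{FullKeepCnt} = 2;      # keep 2 fulls (filled backups) = ~30 days
$Conf{IncrKeepCnt} = 30;     # keep 30 incrementals (unfilled backups)
$Conf{FillCycle}   = 0;      # 0 = fill on the full schedule (default)
```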

Gandalf Corvotempesta
2016-01-20 08:23:00 UTC
Permalink
2016-01-19 11:03 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
As expected, #2 now is missing. I have #1 (incremental), #3 (incremental)
Now, another full is running for srv2.

I have:

SRV2:
#1 incremental
#2 incremental
#4 active (is a full)
#0 and #3 are missing.

SRV1:
#1 incremental
#3 incremental
#0, #2, #4 are missing.

Gandalf Corvotempesta
2016-01-21 09:23:03 UTC
Permalink
2016-01-20 9:23 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Now, another full is running for srv2.
This is what I see from the control panel while it is copying #5 => #4
(right now).

If I understood properly, only the last backup should be filled. This
seems OK in my environment, as all except #5 are not filled (filled=0);
#5 is filled (filled=1).

But #1 and #3 should not be incrementals: there is no full to use as
their starting point. Or, better, there was a full in #0, but #0 has
disappeared, like #2.

Look at the duration times. #1, an incremental, took longer than #4, a
full. I think #1 was an incremental without a full as its starting
point, so it transferred everything (an incremental starting from an
empty set is like a full). #3 is really an incremental, as it took only
210 minutes (very good). But why is #4 another full? And why is it now
copying #5 to #4?

The same goes for another server (currently idle). Its last backup
(yesterday) was a full. The next backup should be an incremental, but I
think it will start another full.
Gandalf Corvotempesta
2016-01-21 09:42:23 UTC
Permalink
2016-01-21 10:23 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
This is what I see from control panel during copying #5 => #4 (right now)
The copy finished; now it is running a backup (it seems to be an
incremental). In the Status column I see "backup full" and in the Type
column "incr".

What does "backup full" mean if the type is "incr"?

Adam Goryachev
2016-01-21 10:44:50 UTC
Permalink
On 21 January 2016 8:42:23 pm AEDT, Gandalf Corvotempesta
Post by Gandalf Corvotempesta
2016-01-21 10:23 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
This is what I see from control panel during copying #5 => #4 (right
now)
Copy finished now is running a backup (seems to be an incremental)
In status column I see: "backup full" and in Type column: "incr"
What does "backup full" means if type is "incr" ?
Silly question, but have you named the host "full", or the share "full"?
A lot of the logs that you posted seem to have "full" as one of those
names. It could be confusing BPC, and it's unlikely that anyone has
tested that scenario.

Regards
Adam

Gandalf Corvotempesta
2016-01-21 10:58:20 UTC
Permalink
Post by Adam Goryachev
Silly question but have you named the host full or the share full?
A lot of your logs that you posted seemed to have full as one of those
names. It could be confusing bpc and it's unlikely that anyone has
tested that scenario.
The share is named "full" as it points to the whole server.
So "backup full" means that it is backing up the share "full"? That makes sense.

Adam Goryachev
2016-01-21 11:16:27 UTC
Permalink
Post by Gandalf Corvotempesta
Post by Adam Goryachev
Silly question but have you named the host full or the share full?
A lot of your logs that you posted seemed to have full as one of those
names. It could be confusing bpc and it's unlikely that anyone has
tested that scenario.
Share is named "full" as point to the whole server.
So, "Backup full" means that is backiung up the share "full" ? That makes sense.
OK, I'd suggest trying to change this to something else. Usually I would
back up /, which obviously means the whole server (ie, the root folder
and everything under it). Though I also tend to specify backing up only
a single filesystem, to avoid backups of /proc or /sys and so on. So if
you have multiple mount points, then you need to specify each one as a
"share".

Are you using rsyncd as opposed to rsync? That is really the only method
I can think of to have the "share name" not equal to an actual directory
or path....

Regards,
Adam

Gandalf Corvotempesta
2016-01-21 11:22:37 UTC
Permalink
Post by Adam Goryachev
OK, I'd suggest to try changing this to something else. Usually I would
backup / which obviously means the whole server (ie, root folder and
everything under it). Though I also tend to specify to only backup the
single filesystem to avoid backups of /proc or /sys and so on. So if you
have multiple mount points, then you need to specify each one as a "share".
/proc and /sys are already excluded, as well as /tmp and a few other things
Post by Adam Goryachev
Are you using rsyncd as opposed to rsync? That is really the only method
I can think of to have the "share name" not equal to a actual directory
or path....
I'm using rsyncd.

Adam Goryachev
2016-01-21 11:41:19 UTC
Permalink
Post by Gandalf Corvotempesta
Post by Adam Goryachev
OK, I'd suggest to try changing this to something else. Usually I would
backup / which obviously means the whole server (ie, root folder and
everything under it). Though I also tend to specify to only backup the
single filesystem to avoid backups of /proc or /sys and so on. So if you
have multiple mount points, then you need to specify each one as a "share".
/proc and /sys are already excluded as well tmp and something else
Post by Adam Goryachev
Are you using rsyncd as opposed to rsync? That is really the only method
I can think of to have the "share name" not equal to a actual directory
or path....
I'm using rsyncd.
Try changing the share name and see if it makes a difference. You could
try "complete" or similar. Worst case, it makes no difference; best
case, it solves your problem and provides a big clue about a bug.

Regards,
Adam

Gandalf Corvotempesta
2016-01-21 11:52:35 UTC
Permalink
Post by Adam Goryachev
Try changing the sharename, see if it makes a difference. You could try
"complete" or similar. Worst case, it makes no difference, but best
case, it will solve your problem, and provide a big clue about a bug.
Actually I have some backups running. I'll change it when these are finished.

Another strange thing that I've found:

#0 does not exist.
#1 is an incremental, but the transferred file count is 2892544 (almost
the whole server). This is OK if the incremental was made against an
empty set (#0 does not exist, so all files were transferred).
#3 is another incremental, but its file count is 2896236. That's too
much, almost the same as #1.

I see these in the "File Size/Count Reuse Summary" table in the web
interface. Do these numbers refer to a complete backup rather than to
the selected backup?

Les Mikesell
2016-01-21 15:30:24 UTC
Permalink
On Thu, Jan 21, 2016 at 3:23 AM, Gandalf Corvotempesta
Post by Gandalf Corvotempesta
This is what I see from control panel during copying #5 => #4 (right now)
If I understood properly, only last backup should be filled. This
seems ok in my environment,
as all except #5 are not filled (filled=0). #5 is filled (filled=1)
But #1 and #3 should not be incremental, there is no full to use as
starting point for their, or, better,
there was a full in #0, but #0 is disappeared like #2
V4 does it backwards from v3. The last backup is always filled and
the older ones are changed to reverse deltas. It must move/copy
things around to arrange that. And the full and incremental runs
aren't tied to keeping filled/unfilled backups anymore. But I still
don't see why any have expired already.
--
Les Mikesell
***@gmail.com

Gandalf Corvotempesta
2016-01-21 16:13:38 UTC
Permalink
Post by Les Mikesell
V4 does it backwards from v3. The last backup is always filled and
the older ones are changed to reverse deltas. It must move/copy
things around to arrange that. And the full and incremental runs
aren't tied to keeping filled/unfilled backups anymore. But, I still
don't see why any expired already.
Another thing to figure out is why BPC runs fsck on every backup every
time. After each backup it runs fsck; it should run only for the last
executed backup, as the previous ones were already checked.

But it is running fsck every time, for each backup that I have, taking
many, many hours. Like now: srv1 is running fsck on #1, then #2, then
#4; after all that, it will run a new backup and start the fsck again.

Les Mikesell
2016-01-21 17:43:07 UTC
Permalink
On Thu, Jan 21, 2016 at 10:13 AM, Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Post by Les Mikesell
V4 does it backwards from v3. The last backup is always filled and
the older ones are changed to reverse deltas. It must move/copy
things around to arrange that. And the full and incremental runs
aren't tied to keeping filled/unfilled backups anymore. But, I still
don't see why any expired already.
Another this to know is why BPC is fsck every backups every time.
After each backup, it will run fsck. It should run just for the last
executed backup, as previously ones was alreday checked.
But is running fsck every time for each backup that I have, taking
srv1 is fsck #1, then #2, then #4. after all, it will run a new backup
and start again the fsck
I see some earlier mailing list messages saying that is normal:
http://sourceforge.net/p/backuppc/mailman/message/34478542/
I guess that's when it removes files that are not in any current
backup. Apparently it only takes a long time when you have a very
large number of files.
--
Les Mikesell
***@gmail.com

Gandalf Corvotempesta
2016-01-21 18:45:05 UTC
Permalink
Post by Les Mikesell
http://sourceforge.net/p/backuppc/mailman/message/34478542/
I guess that's when it removes files that are not in any current
backup. Apparently it only takes a long time when you have a very
large number of files.
I understood what is happening by looking at the code. I need help to
debug further.

When BackupPC_dump is running, a temporary "needFsck.dump" file is
created (line #985).
This will trigger an fsck on the next run (lines #723-#730).

The temporary file is deleted at #1159, but in my case this is not
happening; thus, an fsck is always run.

So, in my case, the issue is between lines #985 and #1159.

Probably my backups are always detected as failed. A failed backup
will exit without removing the temporary fsck file (line #1152).
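
The marker logic described above boils down to a pattern like the
following simplified Perl sketch. This is not the actual BackupPC_dump
code; the path and the run_fsck()/run_backup() helpers are placeholders
standing in for the real steps:

```perl
# Simplified sketch of the needFsck marker lifecycle in BackupPC_dump.
my $marker = "/var/backups/backuppc/pc/srv1/refCnt/needFsck.dump";

run_fsck() if -f $marker;            # a leftover marker triggers fsck

open(my $fh, '>', $marker) or die;   # mark "dump in progress"
close($fh);

my $ok = run_backup();               # the actual transfer

unlink($marker) if $ok;              # removed only on success; a failed
                                     # backup leaves it behind, so the
                                     # next run starts with another fsck
```

In other words, any backup that ends with an error leaves the marker
behind, which would match the behaviour seen here.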


Sometimes I'm getting these errors:

2016-01-21 13:53:09 Got fatal error during xfer (rsync error: error in
rsync protocol data stream (code 12) at io.c(629) [generator=3.0.9.3])
2016-01-21 13:53:15 Backup aborted (rsync error: error in rsync
protocol data stream (code 12) at io.c(629) [generator=3.0.9.3])

Adam Goryachev
2016-01-21 22:09:21 UTC
Permalink
Post by Gandalf Corvotempesta
Post by Les Mikesell
http://sourceforge.net/p/backuppc/mailman/message/34478542/
I guess that's when it removes files that are not in any current
backup. Apparently it only takes a long time when you have a very
large number of files.
I understood what is happening by looking at the code. I need help to
debug more.
When BackupPC_dump is running, a temporary "needFsck.dump" is created
(line #985)
This will trigger an fsck on next run (lines #723-#730)
Temporary file is deleted at #1159 but in my case this is not
happening, thus, an fsck is always run.
So, in my case, the issue is between lines #985 and #1159
Probably, my backups are always detected as failed. A failed backup
will exit without removing the temporary fsck file (line #1152)
Which would explain the "missing" backup numbers... possibly. I think if
you get an error, then you will end up with a missing backup number, so
you won't have consecutive backup numbers (even if backups haven't been
expired).
Post by Gandalf Corvotempesta
2016-01-21 13:53:09 Got fatal error during xfer (rsync error: error in
rsync protocol data stream (code 12) at io.c(629) [generator=3.0.9.3])
2016-01-21 13:53:15 Backup aborted (rsync error: error in rsync
protocol data stream (code 12) at io.c(629) [generator=3.0.9.3])
Can you run dmesg, and see if you have any lines like this:
[11613050.504117] rsync_bpc[7279]: segfault at 7f9ee5c7e428 ip
00000000004473af sp 00007ffc3d7bdf80 error 4 in rsync_bpc[400000+75000]

There seems to be some bug in rsync_bpc. I was working on tracking it
down last week, but my C programming is rather limited, so I'm stuck.
I'm hoping someone else on the bpc-dev list might be able to assist;
otherwise, I might try one of the online programming forums to see if I
can get some assistance there.

Regards,
Adam
--
Adam Goryachev Website Managers www.websitemanagers.com.au

Gandalf Corvotempesta
2016-01-21 22:20:10 UTC
Permalink
Post by Adam Goryachev
[11613050.504117] rsync_bpc[7279]: segfault at 7f9ee5c7e428 ip
00000000004473af sp 00007ffc3d7bdf80 error 4 in rsync_bpc[400000+75000]
There seems to be some bug in rsync_bpc, I was working on tracking that
down last week, but my C programming is rather limited, so I'm stuck.
Hoping someone else on the bpc-dev list might be able to assist.
Otherwise, I might try one of the online programming forums to see if I
can get some assistance there.
I don't have any lines like yours.

Adam Goryachev
2016-01-21 22:46:32 UTC
Permalink
Post by Gandalf Corvotempesta
Post by Adam Goryachev
[11613050.504117] rsync_bpc[7279]: segfault at 7f9ee5c7e428 ip
00000000004473af sp 00007ffc3d7bdf80 error 4 in rsync_bpc[400000+75000]
There seems to be some bug in rsync_bpc, I was working on tracking that
down last week, but my C programming is rather limited, so I'm stuck.
Hoping someone else on the bpc-dev list might be able to assist.
Otherwise, I might try one of the online programming forums to see if I
can get some assistance there.
I don't have any lines like yours.
Then you have some other error causing problems. Check various things
such as permissions, disk space, errors on the client, etc. Something is
causing rsync to fail, and you will need to fix that to get successful
backups. Check both system logs as well as rsync logs on both server and
client.

Regards,
Adam
--
Adam Goryachev Website Managers www.websitemanagers.com.au

Gandalf Corvotempesta
2016-01-23 11:10:33 UTC
Permalink
Post by Adam Goryachev
Then you have some other error causing problems. Check various things
such as permissions, disk space, errors on the client, etc. Something is
causing rsync to fail, and you will need to fix that to get successful
backups. Check both system logs as well as rsync logs on both server and
client.
Started from scratch another time, this time with XFS instead of ext4.

srv2 took 638.1 minutes to create its first full. This is good; no
rsync errors in the log file:

2016-01-22 23:00:01 Created directory /var/backups/backuppc/pc/srv2/refCnt
2016-01-22 23:00:01 full backup started for directory full
2016-01-23 09:38:07 full backup 0 complete, 2912565 files, 2912565
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)

As expected there is no "needFsck.dump" file, so on the next run fsck
should not run.

Right now srv2 should be in a clean state; if BPC removes backup #0 on
the next run, there is a bug in BPC.

The same is happening for srv1 (the backup is still running, at "count update #3/5"):

2016-01-22 22:41:19 Created directory /var/backups/backuppc/pc/srv1/refCnt
2016-01-22 22:41:19 full backup started for directory full
2016-01-23 11:59:06 full backup 0 complete, 4628296 files, 4628296
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)

Let's see what would happen this night.

Gandalf Corvotempesta
2016-01-24 09:48:45 UTC
Permalink
2016-01-23 12:10 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Let's see what would happen this night.
The same happened again.

2016-01-22 22:41:19 Created directory /var/backups/backuppc/pc/srv1/refCnt
2016-01-22 22:41:19 full backup started for directory full
2016-01-23 11:59:06 full backup 0 complete, 4628296 files, 4628296
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)
2016-01-23 23:43:08 incr backup started for directory full
2016-01-24 01:21:25 incr backup 1 complete, 4630086 files, 4630086
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)

Now #0 is missing. I only have #1 set as incremental.

The same for srv2


2016-01-23 12:00:01 Running 2 BackupPC_nightly jobs from 0..15 (out of 0..15)
2016-01-23 12:00:01 Running BackupPC_nightly -m -P 6 0 127 (pid=4993)
2016-01-23 12:00:01 Running BackupPC_nightly -P 6 128 255 (pid=4994)
2016-01-23 12:00:01 Next wakeup is 2016-01-23 23:00:00
2016-01-23 12:00:01 BackupPC_nightly now running
BackupPC_refCountUpdate -m -s -c -P 6 -r 128-255
2016-01-23 12:00:01 BackupPC_nightly now running
BackupPC_refCountUpdate -m -s -c -P 6 -r 0-127
2016-01-23 12:04:55 admin : Missing pool file
0010f3fbc8b305b003800d08a415e9f4 count 1
2016-01-23 12:12:53 admin : Missing pool file
15bf91720300106d0300107fb128e928 count 1
2016-01-23 12:13:22 admin1 : Missing pool file
a20eddff930103001012ef8c93010300 count 1
2016-01-23 12:28:36 admin : BackupPC_refCountPrint: total errors: 2
2016-01-23 12:28:36 admin : xferPids
2016-01-23 12:28:36 BackupPC_nightly now running BackupPC_sendEmail
2016-01-23 12:28:37 admin1 : BackupPC_refCountPrint: total errors: 1
2016-01-23 12:28:37 admin1 : xferPids
2016-01-23 12:28:37 Finished admin1 (BackupPC_nightly -P 6 128 255)
2016-01-23 12:28:48 Finished admin (BackupPC_nightly -m -P 6 0 127)
2016-01-23 12:28:48 Pool nightly clean removed 0 files of size 0.00GB
2016-01-23 12:28:48 Pool is 0.00GB, 0 files (0 repeated, 0 max chain,
0 max links), 0 directories
2016-01-23 12:28:48 Cpool nightly clean removed 0 files of size 0.00GB
2016-01-23 12:28:48 Cpool is 0.00GB, 0 files (0 repeated, 0 max chain,
0 max links), 0 directories
2016-01-23 12:28:48 Pool4 nightly clean removed 0 files of size 0.00GB
2016-01-23 12:28:48 Pool4 is 0.00GB, 0 files (0 repeated, 0 max chain,
0 max links), 0 directories
2016-01-23 12:28:48 Cpool4 nightly clean removed 0 files of size 0.00GB
2016-01-23 12:28:48 Cpool4 is 281.26GB, 5227397 files (0 repeated, 0
max chain, 144350 max links), 16512 directories
2016-01-23 12:28:48 Running BackupPC_rrdUpdate (pid=5069)
2016-01-23 12:28:50 admin-1 : /usr/bin/rrdtool is not a valid executable
2016-01-23 12:28:50 Finished admin-1 (BackupPC_rrdUpdate)
2016-01-23 12:42:36 Finished full backup on srv1
2016-01-23 23:00:00 Next wakeup is 2016-01-24 00:00:00
2016-01-23 23:43:08 Started incr backup on srv1 (pid=5411, share=full)
2016-01-23 23:44:50 Started incr backup on srv9 (pid=5410, share=full)
2016-01-24 00:00:00 Reading hosts file
2016-01-24 00:00:00 Next wakeup is 2016-01-24 01:00:00
2016-01-24 00:10:22 Re-read config file because of a SIG_HUP
2016-01-24 00:10:22 Next wakeup is 2016-01-24 01:00:00
2016-01-24 01:00:00 Reading hosts file
2016-01-24 01:00:00 Next wakeup is 2016-01-24 02:00:00
2016-01-24 01:21:25 srv1: removing filled backup 0
2016-01-24 01:59:46 srv9: removing filled backup 0
2016-01-24 02:00:00 Next wakeup is 2016-01-24 03:00:00
2016-01-24 03:00:00 Next wakeup is 2016-01-24 04:00:00
2016-01-24 04:00:00 Next wakeup is 2016-01-24 05:00:00
2016-01-24 05:00:00 Next wakeup is 2016-01-24 06:00:00
2016-01-24 06:00:00 Next wakeup is 2016-01-24 07:00:00
2016-01-24 07:00:00 Next wakeup is 2016-01-24 12:00:00
2016-01-24 07:50:12 Finished incr backup on srv9
2016-01-24 08:13:52 Finished incr backup on srv1


As you can see, at 2016-01-24 01:21:25 and 2016-01-24 01:59:46 BPC
removed #0 from both servers.

I don't think this is normal. I've started from scratch for the third
(or fourth) time, with no pool files or previous dumps. It's a plain
installation of BPC and something is not working properly.

Adam Goryachev
2016-01-24 22:17:19 UTC
Permalink
Post by Gandalf Corvotempesta
2016-01-23 12:10 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Let's see what would happen this night.
The same happened again.
2016-01-22 22:41:19 Created directory /var/backups/backuppc/pc/srv1/refCnt
2016-01-22 22:41:19 full backup started for directory full
2016-01-23 11:59:06 full backup 0 complete, 4628296 files, 4628296
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)
2016-01-23 23:43:08 incr backup started for directory full
2016-01-24 01:21:25 incr backup 1 complete, 4630086 files, 4630086
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)
Now #0 is missing. I only have #1 set as incremental.
Could you please try changing the share name to something other than full.

Regards,
Adam


Gandalf Corvotempesta
2016-01-24 22:28:46 UTC
Permalink
Post by Adam Goryachev
Could you please try changing the share name to something other than full.
I'll try tomorrow morning. I would like to keep this backup running.

Gandalf Corvotempesta
2016-01-25 08:38:10 UTC
Permalink
Post by Adam Goryachev
Could you please try changing the share name to something other than full.
Changed share name to "everything".
Now I'm running a brand new backup of a brand new client.

Let's see.....

Gandalf Corvotempesta
2016-01-25 17:22:32 UTC
Permalink
Post by Adam Goryachev
Could you please try changing the share name to something other than full.
I've changed the share name to "everything" and added a brand new
client to backup.

2016-01-25 09:49:44 full backup started for directory everything
2016-01-25 18:06:03 full backup 0 complete, 2275732 files, 2275732
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)

Let's see if tonight backup #0 would be removed as usual

Les Mikesell
2016-01-26 03:27:47 UTC
Permalink
On Mon, Jan 25, 2016 at 11:22 AM, Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Post by Adam Goryachev
Could you please try changing the share name to something other than full.
I've changed the share name to "everything" and added a brand new
client to backup.
2016-01-25 09:49:44 full backup started for directory everything
2016-01-25 18:06:03 full backup 0 complete, 2275732 files, 2275732
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)
Let's see if tonight backup #0 would be removed as usual
It might be worth looking through the web interface at the values for
all the settings. Sometimes you can have a syntax error in the text
file that is just interpreted by perl as something unexpected -
something like using double quotes instead of single around a string
containing the @ symbol. The web editor should show you how perl
parses the settings.
--
Les Mikesell
***@gmail.com

Gandalf Corvotempesta
2016-01-26 15:48:44 UTC
Permalink
Post by Les Mikesell
It might be worth looking through the web interface at the values for
all the settings. Sometimes you can have a syntax error in the text
file that is just interpreted by perl as something unexpected -
something like using double quotes instead of single around a string
parses the settings.
Already checked. No issue.
This is a screenshot for the Schedule configuration tab in web interface.
Gandalf Corvotempesta
2016-01-26 15:46:02 UTC
Permalink
2016-01-25 18:22 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Let's see if tonight backup #0 would be removed as usual
As expected, srv3 is now in "delete #0" phase.

Gandalf Corvotempesta
2016-01-26 15:51:38 UTC
Permalink
2016-01-26 16:46 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
As expected, srv3 is now in "delete #0" phase.
This is the log for today. As you can see, BPC is removing filled
backups, and with them the first full.
srv2 is currently in the "merge #2 -> 1" phase; I don't know if that is
normal, but removing backup #0 is definitely not.

2016-01-26 12:00:00 Running 2 BackupPC_nightly jobs from 0..15 (out of 0..15)
2016-01-26 12:00:00 Running BackupPC_nightly -m -P 8 0 127 (pid=25444)
2016-01-26 12:00:00 Running BackupPC_nightly -P 8 128 255 (pid=25445)
2016-01-26 12:00:00 Next wakeup is 2016-01-26 23:00:00
2016-01-26 12:00:01 BackupPC_nightly now running
BackupPC_refCountUpdate -m -s -c -P 8 -r 128-255
2016-01-26 12:00:01 BackupPC_nightly now running
BackupPC_refCountUpdate -m -s -c -P 8 -r 0-127
2016-01-26 12:02:20 admin : Missing pool file
0010f3fbc8b305b003800d08a415e9f4 count 1
2016-01-26 12:02:20 admin : Missing pool file
0080a098b205a403fa0efc0e818a01b1 count 1
2016-01-26 12:16:30 admin : Missing pool file
15bf91720300106d0300107fb128e928 count 1
2016-01-26 12:19:41 admin : Missing pool file
1c66d1f40253456c0defd13610e6ab2c count 1
2016-01-26 12:41:07 admin : Missing pool file
5f6578706972650000fbfc91b405a403 count 1
2016-01-26 12:54:58 admin1 : Missing pool file
95c88f01030010742e69740000d1e9cb count 1
2016-01-26 12:58:06 admin1 : Missing pool file
9d40c14c2a099c39bea4b7270db3a39a count 1
2016-01-26 12:59:56 admin1 : Missing pool file
a20eddff930103001012ef8c93010300 count 1
2016-01-26 13:26:08 admin : BackupPC_refCountPrint: total errors: 5
2016-01-26 13:26:08 admin : xferPids
2016-01-26 13:26:09 BackupPC_nightly now running BackupPC_sendEmail
2016-01-26 13:26:12 Finished admin (BackupPC_nightly -m -P 8 0 127)
2016-01-26 13:53:36 admin1 : BackupPC_refCountPrint: total errors: 3
2016-01-26 13:53:36 admin1 : xferPids
2016-01-26 13:53:36 Finished admin1 (BackupPC_nightly -P 8 128 255)
2016-01-26 13:53:36 Pool nightly clean removed 0 files of size 0.00GB
2016-01-26 13:53:36 Pool is 0.00GB, 0 files (0 repeated, 0 max chain,
0 max links), 0 directories
2016-01-26 13:53:36 Cpool nightly clean removed 0 files of size 0.00GB
2016-01-26 13:53:36 Cpool is 0.00GB, 0 files (0 repeated, 0 max chain,
0 max links), 0 directories
2016-01-26 13:53:36 Pool4 nightly clean removed 0 files of size 0.00GB
2016-01-26 13:53:36 Pool4 is 0.00GB, 0 files (0 repeated, 0 max chain,
0 max links), 0 directories
2016-01-26 13:53:36 Cpool4 nightly clean removed 17893 files of size 0.12GB
2016-01-26 13:53:36 Cpool4 is 310.43GB, 7398321 files (0 repeated, 0
max chain, 144973 max links), 16512 directories
2016-01-26 13:53:36 Running BackupPC_rrdUpdate (pid=25575)
2016-01-26 13:53:38 admin-1 : /usr/bin/rrdtool is not a valid executable
2016-01-26 13:53:38 Finished admin-1 (BackupPC_rrdUpdate)
2016-01-26 14:58:53 Started incr backup on srv3 (pid=25446, share=everything)
2016-01-26 16:12:06 srv2: removing filled backup 2
2016-01-26 16:27:25 srv3: removing filled backup 0

Gandalf Corvotempesta
2016-01-26 17:21:48 UTC
Permalink
2016-01-26 16:46 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
As expected, srv3 is now in "delete #0" phase.
I think that BPC is trying to remove the files "common" to the last
executed backup and the previous one, because only the last one is
filled.

So, in the case of #1, it tries to remove the common files from #0,
but for some strange reason it removes all the files and the whole
backup.

The same happens for every backup: with #3, it removes the whole #2.

Stephen
2016-01-26 20:13:33 UTC
Permalink
Post by Gandalf Corvotempesta
2016-01-26 16:46 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
As expected, srv3 is now in "delete #0" phase.
I think that BPC is trying to remove "common" files between the last
executed backup and the previous one
because filled is only the last.
So, in case of #1, it tries to remove common files from #0, but for
some strange reason, is removing all files and the whole backup.
The same happens for every backup. With #3, it remove the whole #2.
Have you seen this in the v4 docs?:

<quote>
$Conf{FullKeepCnt} = 1;

Number of filled backups to keep. Must be >= 1.

Note: Starting in V4+, deleting backups is done based on Fill/Unfilled,
not whether the original backup was full/incremental. For historical
reasons these parameters continue to be called FullKeepCnt, rather than
FilledKeepCnt. If $Conf{FillCycle} is 0, then full backups continue to
be filled, so the terms are interchangeable. For V3 backups, the expiry
settings have their original meanings.

In the steady state, each time a full backup completes successfully the
oldest one is removed. If this number is decreased, the extra old backups
will be removed.
</quote>

Re-read the last paragraph.

A similar admonishment applies to IncrKeepCnt:

<quote>
Note: Starting in V4+, deleting backups is done based on Fill/Unfilled, not
whether the original backup was full/incremental. For historical reasons
these parameters continue to be called IncrKeepCnt, rather than
UnfilledKeepCnt. If $Conf{FillCycle} is 0, then incremental backups
continue to be unfilled, so the terms are interchangeable. For V3 backups,
the expiry settings have their original meanings.

In the steady state, each time an incr backup completes successfully the
oldest one is removed. If this number is decreased, the extra old backups
will be removed.
</quote>

Now, all of that said... I'd suggest you change your FullKeepCnt value to 2
(or greater) and see what happens; there may be a hidden edge-case there. I
have not seen the problem you're having. My "Schedule" settings are:

FullPeriod: 29.5
FillCycle: 0
FullKeepCnt: 5, 0, 6, 0, 2, 0, 1
FullKeepCntMin: 1
FullAgeMax: 90
IncrPeriod: 0.5
IncrKeepCnt: 30
IncrKeepCntMin: 1
IncrAgeMax: 30

You may also want to read the FillCycle setting documentation. It can
affect which backups are filled/unfilled, but I'd recommend changing only
one variable at a time.

Hope this helps!

Cheers,
Stephen



Gandalf Corvotempesta
2016-01-24 22:19:06 UTC
Permalink
2016-01-24 10:48 GMT+01:00 Gandalf Corvotempesta
Post by Gandalf Corvotempesta
I don't think this is normal. I'm started from scratch for the third
(or fourth) time, with no pool files or previous dumps. Its a plain
installation of BPC and something is not working properly.
Another full has started. This is not OK: it should start an
incremental, but BPC has removed #0 (it was a full), so now it is
always running fulls.

2016-01-22 22:41:19 Created directory /var/backups/backuppc/pc/srv1/refCnt
2016-01-22 22:41:19 full backup started for directory full
2016-01-23 11:59:06 full backup 0 complete, 4628296 files, 4628296
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)
2016-01-23 23:43:08 incr backup started for directory full
2016-01-24 01:21:25 incr backup 1 complete, 4630086 files, 4630086
bytes, 1 xferErrs (0 bad files, 0 bad shares, 1 other)
2016-01-24 23:00:04 full backup started for directory full


Why is BPC removing all #0 backups? Every time...

Stephen
2016-01-21 22:58:44 UTC
Permalink
I have glanced at the code and also believe that BackupPC_fsck is running
unnecessarily after every backup attempt, whether it is successful or not.

In my xferLogs, BackupPC_refCountUpdate is being called twice at the end of
a backup. Once like this:

Xfer PIDs are now
Running BackupPC_refCountUpdate -h afsgaia1.cas.unc.edu on somehost.unc.edu
xferPids 4508
BackupPC_refCountUpdate: host somehost.unc.edu got 0 errors
BackupPC_refCountPrint: total errors: 0
xferPids
Finished BackupPC_refCountUpdate (running time: 16 sec)

Then again like this:
Running BackupPC_refCountUpdate -h somehost.unc.edu -f -c on somehost.unc.edu
xferPids 4509
BackupPC_refCountUpdate: host somehost.unc.edu got 0 errors
BackupPC_refCountPrint: total errors: 0
xferPids
Finished BackupPC_refCountUpdate (running time: 1334 sec)

The second refCountUpdate includes the "-f -c" args which appear to force
an fsck on the host.

I'm using rsync over ssh and there are no errors reported:

Done: 0 errors, 31 filesExist, 86366561 sizeExist, 47163323 sizeExistComp,
0 filesTotal, 0 sizeTotal, 49 filesNew, 180552022 sizeNew, 58504938
sizeNewComp, 242121 inode
Number of files: 105259
Number of files transferred: 179
Total file size: 2823131278 bytes
Total transferred file size: 292936438 bytes
Literal data: 18830521 bytes
Matched data: 248088062 bytes
File list size: 2308933
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 616768
Total bytes received: 21567585
sent 616768 bytes received 21567585 bytes 36638.07 bytes/sec
total size is 2823131278 speedup is 127.26
DoneGen: 0 errors, 2 filesExist, 8306 sizeExist, 61440 sizeExistComp, 90549
filesTotal, 2823131278 sizeTotal, 0 filesNew, 0 sizeNew, 0 sizeNewComp,
242138 inode

Unlike Gandalf, my backups are succeeding. I have successfully performed
restores. Fulls and incrs happen as expected. Expiration looks correct per
my config policy.
Post by Gandalf Corvotempesta
Post by Adam Goryachev
[11613050.504117] rsync_bpc[7279]: segfault at 7f9ee5c7e428 ip
00000000004473af sp 00007ffc3d7bdf80 error 4 in rsync_bpc[400000+75000]
There seems to be some bug in rsync_bpc, I was working on tracking that
down last week, but my C programming is rather limited, so I'm stuck.
Hoping someone else on the bpc-dev list might be able to assist.
Otherwise, I might try one of the online programming forums to see if I
can get some assistance there.
I don't have any lines like yours.
Adam Goryachev
2016-01-21 23:22:30 UTC
Permalink
Post by Stephen
I have glanced at the code and also believe that BackupPC_fsck is running
unnecessarily after every backup attempt, whether it is successful or not.
In my xferLogs, BackupPC_refCountUpdate is being called twice at the end of
Xfer PIDs are now
Running BackupPC_refCountUpdate -h afsgaia1.cas.unc.edu on somehost.unc.edu
xferPids 4508
BackupPC_refCountUpdate: host somehost.unc.edu got 0 errors
BackupPC_refCountPrint: total errors: 0
xferPids
Finished BackupPC_refCountUpdate (running time: 16 sec)
Running BackupPC_refCountUpdate -h somehost.unc.edu -f -c on somehost.unc.edu
xferPids 4509
BackupPC_refCountUpdate: host somehost.unc.edu got 0 errors
BackupPC_refCountPrint: total errors: 0
xferPids
Finished BackupPC_refCountUpdate (running time: 1334 sec)
The second refCountUpdate includes the "-f -c" args which appear to force
an fsck on the host.
Done: 0 errors, 31 filesExist, 86366561 sizeExist, 47163323 sizeExistComp,
0 filesTotal, 0 sizeTotal, 49 filesNew, 180552022 sizeNew, 58504938
sizeNewComp, 242121 inode
Number of files: 105259
Number of files transferred: 179
Total file size: 2823131278 bytes
Total transferred file size: 292936438 bytes
Literal data: 18830521 bytes
Matched data: 248088062 bytes
File list size: 2308933
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 616768
Total bytes received: 21567585
sent 616768 bytes received 21567585 bytes 36638.07 bytes/sec
total size is 2823131278 speedup is 127.26
DoneGen: 0 errors, 2 filesExist, 8306 sizeExist, 61440 sizeExistComp, 90549
filesTotal, 2823131278 sizeTotal, 0 filesNew, 0 sizeNew, 0 sizeNewComp,
242138 inode
Unlike Gandalf, my backups are succeeding. I have successfully performed
restores. Fulls and incrs happen as expected. Expiration looks correct per
my config policy.
Yes, exactly. It seems to run the full fsck even when not required.
Preventing the second run is probably easy, but working out why it was
written that way, and ensuring it really isn't needed, that is harder.....

If you have some extra time to look into the code, then I'm sure many
people would appreciate any analysis you can do.

Regards,
Adam
--
Adam Goryachev Website Managers www.websitemanagers.com.au

Gandalf Corvotempesta
2016-01-21 23:57:05 UTC
Permalink
Post by Stephen
I have glanced at the code and also believe that BackupPC_fsck is running
unnecessarily after every backup attempt, whether it is successful or not.
Please check if you have a file called "needFsck.dump" in your refCnt directory.
This file is created when a backup starts and should be automatically
deleted when it ends.

I think that for some reason this file is never removed, and thus an
fsck is always forced.
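A quick way to check this after all backups have finished (the TopDir /var/backups/backuppc is taken from the logs earlier in this thread; adjust it for your install):

```shell
# Each host's refCnt directory should contain no needFsck.* file once its
# backup has completed cleanly; anything listed here forces an fsck.
find /var/backups/backuppc/pc/*/refCnt -name 'needFsck*' -ls 2>/dev/null
```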

Adam Goryachev
2016-01-21 22:11:35 UTC
Permalink
Post by Les Mikesell
On Thu, Jan 21, 2016 at 10:13 AM, Gandalf Corvotempesta
Post by Gandalf Corvotempesta
Post by Les Mikesell
V4 does it backwards from v3. The last backup is always filled and
the older ones are changed to reverse deltas. It must move/copy
things around to arrange that. And the full and incremental runs
aren't tied to keeping filled/unfilled backups anymore. But, I still
don't see why any expired already.
Another thing to figure out is why BPC runs fsck on every backup, every time.
After each backup, it will run fsck. It should run only for the last
executed backup, as the previous ones were already checked.
But it is running fsck every time for each backup that I have: on srv1
it is fsck #1, then #2, then #4. After all that, it will run a new
backup and start the fsck again.
http://sourceforge.net/p/backuppc/mailman/message/34478542/
I guess that's when it removes files that are not in any current
backup. Apparently it only takes a long time when you have a very
large number of files.
Yes, it is "normal" but it doesn't seem to be ideal, and I don't think
it should be required. I started looking at this, but then got
sidetracked by the more serious bug in rsync_bpc. Will eventually come
back to this, but it means trying to understand the underlying
principles of how BPC works on the pool and pc trees.

Regards,
Adam
--
Adam Goryachev Website Managers www.websitemanagers.com.au
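The reverse-delta scheme Les describes above (the newest backup is always filled; when a new backup arrives, the previously filled one is rewritten as a delta against it) can be sketched as a toy model. This is an illustration of the idea only, not BackupPC's actual on-disk format:

```python
# Toy model of v4-style reverse deltas: chain[-1] is always the filled
# (complete) backup; older entries keep only what differs from the next
# one. Restoring an old backup means walking deltas backwards.

def add_backup(chain, new_filled):
    """Append a new filled backup, demoting the old tail to a delta."""
    if chain:
        old = chain[-1]
        # Keep only files whose content differs from the new backup.
        delta = {f: v for f, v in old.items() if new_filled.get(f) != v}
        # Mark files that exist now but did not exist in the old backup.
        for f in new_filled:
            if f not in old:
                delta[f] = None  # None means "absent in this backup"
        chain[-1] = delta
    chain.append(dict(new_filled))
    return chain


def restore(chain, index):
    """Rebuild backup #index by applying deltas backwards from the end."""
    state = dict(chain[-1])
    for delta in reversed(chain[index:-1]):
        for f, v in delta.items():
            if v is None:
                state.pop(f, None)
            else:
                state[f] = v
    return state
```

This also makes the move/copy cost visible: every new backup rewrites the previous tail, which is work v3's forward-delta scheme did not do.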

Adam Goryachev
2016-01-18 01:52:19 UTC
Permalink
Post by Gandalf Corvotempesta
Post by Adam Goryachev
I would suggest anyone considering BPC should use v3 unless they
specifically need the v4 features, and even then, are prepared for the
possible shortcomings inherent in using alpha software.
I can't use v3 because I'm having servers with millions of files to be
transferred, and v3 takes ages in "Building file list".
v4 uses native rsync with an incremental file list. This is faster: the
transfer starts immediately, not after many, many hours.
BTW, one server was backed up properly and is now running "fsck #0". As
I understood from the docs, BackupPC should not run while
fsck is running. I'm having a full dump of srv1, and srv2 is
doing "fsck #0". Is this bad, or can fsck run in parallel with other
backups?
AFAIK, when you see fsck #0 on the web interface, it is actually doing a
refCountUpdate, which just updates the current host, therefore it
doesn't matter if other hosts are running a concurrent backup.
Post by Gandalf Corvotempesta
Usually, how long does it take to complete? This is slowing down the
other backup, which seems to be frozen.
This is dependent on the number of files in the backup and, especially,
your disk I/O performance. You will probably want to think about RAID10
or similar for your backup server in order to massively improve
performance (or use bigger caches, either hardware raid cache, or allow
linux to cache more). Equally, your filesystem might need tuning, or
changing to a different FS might help too.

Running multiple concurrent backups might be slower than running one at
a time, depending on the speed of the client, and the performance of
your BPC server. Remember, the disk can only read or write from one
sector at a time, and random performance is significantly worse.
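On the one-at-a-time point: inside BackupPC the usual knob is the $Conf{MaxBackups} setting. For heavy jobs outside BackupPC's scheduler, an exclusive flock is a minimal way to serialise them; a sketch (the lock-file path is an arbitrary choice):

```python
# Sketch: serialise disk-heavy jobs with an exclusive advisory lock so
# they never run concurrently. Unix-only (fcntl).
import fcntl


def run_serialised(lockfile, job):
    """Block until no other process holds lockfile, then run job()."""
    with open(lockfile, "w") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)  # waits for any concurrent holder
        try:
            return job()
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)
```

Two processes calling run_serialised() on the same path will take turns, which on a single spindle can beat running both at once.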

Regards,
Adam
--
Adam Goryachev Website Managers www.websitemanagers.com.au

Gandalf Corvotempesta
2016-01-12 01:08:40 UTC
Permalink
Post by Adam Goryachev
A full can certainly happen after a failed incremental, we don't know
why. Again, the time looks very strange, how was this initiated? Can you
provide copies of your configs? The detailed backup logs?
I can post this, for the incremental: http://pastebin.com/raw/v74rGTFq

There is no backup #0 here, only #1 and #2, both identical (BPC is
copying #2 into #1).
