
SQL Error - and what I did seems to have fixed it?

First time setting up Piwik. We have 16 web servers in a cluster that we need to track stats on.

First task: get 2014 data into the system. Because 'mine not to reason why'.

Extracted logs from backup to /var/log/2014/$servername

Ran this command:

/path/to/stats/log-analytics/import_logs.py --token-auth=<API from server> /var/log/2014/$servername --url=myurl.com --idsite=1 --recorders=16 --enable-http-errors --enable-http-redirects --enable-static --enable-bots
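
(Since each of the 16 servers has its logs extracted under /var/log/2014/$servername, the whole backfill can be scripted; a minimal sketch, with placeholder hostnames standing in for the real cluster, would be:)

# Sketch only: run the same import over every server's 2014 logs in turn.
# web01..web03 are hypothetical hostnames; <API from server> is the post's
# placeholder for the real auth token.
for servername in web01 web02 web03; do
    /path/to/stats/log-analytics/import_logs.py \
        --token-auth=<API from server> \
        --url=myurl.com --idsite=1 --recorders=16 \
        --enable-http-errors --enable-http-redirects \
        --enable-static --enable-bots \
        /var/log/2014/$servername
done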


It chugged along for a while, then halted:

Fatal error: HTTP Error 500 Internal Server Error, response: {"status":"error","tracked":0}
You can restart the import of "/var/log/2014/$servername//blah.gz" from the point it failed by specifying --skip=18901 on the command line.
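
(Re-running the identical command with the suggested flag does pick up where it stopped; --skip=N tells import_logs.py to skip the first N lines of the log. With the invocation above, the resume would look like:)

/path/to/stats/log-analytics/import_logs.py --token-auth=<API from server> --url=myurl.com --idsite=1 --recorders=16 --enable-http-errors --enable-http-redirects --enable-static --enable-bots --skip=18901 /var/log/2014/$servername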


At the same time, this was written to httpd's error log:

[error] [client 10.210.2.26] Error in Piwik (tracker): Error query: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction
In query: UPDATE piwik_log_visit SET idvisitor = ?, visit_total_time = ?, visit_last_action_time = ?, visit_exit_idaction_url = ?, visit_total_actions = visit_total_actions + 1, custom_var_k1 = ?, custom_var_v1 = ? WHERE idsite = ? AND idvisit = ?
Parameters: array (
  0 => 'J\x96F\x86\xe2\xa8pa',
  1 => 8554,
  2 => '2014-01-11 11:59:09',
  3 => 230633,
  4 => 'Bot',
  5 => 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
  6 => 1,
  7 => 22530,
)


This happened pretty frequently.
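
(The deadlock itself is not surprising with this setup: --recorders=16 fires 16 concurrent tracker requests, and when several of them belong to the same visit they race to UPDATE the same piwik_log_visit row, so InnoDB breaks the cycle by rolling one transaction back with error 1213. For anyone digging in, the most recent deadlock can be inspected with plain MySQL, nothing Piwik-specific:)

mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G" | grep -A 30 "LATEST DETECTED DEADLOCK"

(Lowering --recorders would presumably also reduce the contention, at the cost of a slower import.)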

Following the advice at http://stackoverflow.com/questions/5836623/getting-lock-wait-timeout-exceeded-try-restarting-transaction-even-though-im, I set these values in MySQL:

mysql> SET tx_isolation = 'READ-COMMITTED';
Query OK, 0 rows affected (0.00 sec)

mysql> SET GLOBAL tx_isolation = 'READ-COMMITTED';
Query OK, 0 rows affected (0.00 sec)


And restarted the import. That was an hour and 20 minutes ago; it has not failed yet, although it _has_ written that error to the log.
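
(Two caveats on those SET statements, assuming a stock MySQL install: SET GLOBAL only applies to connections opened after it runs, so existing sessions keep the old isolation level, and the value is lost the next time mysqld restarts. The active settings can be checked with:)

mysql -e "SELECT @@global.tx_isolation, @@session.tx_isolation;"

(To make the change permanent across restarts, the equivalent line goes under [mysqld] in my.cnf:)

[mysqld]
transaction-isolation = READ-COMMITTED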

Comments? Am I missing a 'how to make SQL act better with large sets of data' guide?
